CNFET Rivals Silicon Performance 78

Baldrson writes "Applied Physics Letters is carrying a paper on a CNFET (carbon nanotube field-effect transistor) advance that now rivals silicon performance for both n and p type devices. There is also a New York Times article in which it is reported that "it would be two to three more years before I.B.M. was ready to work on prototypes of future nanotube chips and as many as 10 years before they would be commercially available". This may be what's at the end of the road for CMOS."

  • Exciting times (Score:2, Interesting)

    by inkfox ( 580440 ) on Monday May 20, 2002 @10:57PM (#3555784) Homepage
    This is just yet another reason to be excited about technology.

    On top of this, look at the on-chip SMP work that the major processor manufacturers are working on. They're working on being able to run multiple virtual CPUs, sharing the same execution pipelines and caches, so when one thread is doing a lot of multiplication work, another may be able to use the addition and general flow pipelines. Allegedly, in simulations, the multiple virtual CPUs end up executing at about 70% of the efficiency of real, individual CPUs, all with very little extra silicon.
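    A quick back-of-the-envelope check on that 70% figure (a toy model only; the efficiency number is just the simulation claim quoted above, not a measured value):

```python
# Toy model of SMT ("virtual CPU") throughput vs. real SMP, using the
# ~70% per-virtual-CPU efficiency figure the parent comment quotes.

def smt_throughput(virtual_cpus, efficiency=0.70):
    """Aggregate throughput in units of one real CPU's throughput."""
    return virtual_cpus * efficiency

# Two virtual CPUs sharing one die vs. one plain CPU:
print(smt_throughput(2))   # 1.4x a single CPU's throughput
# ...but vs. two real CPUs (2.0x), you give up 0.6 CPUs' worth of work --
# the win is that it costs "very little extra silicon".
```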

    On top of that, manufacturing is improving to the point where yields are very, very high. This means it's becoming more feasible to make larger and larger dies with more and more on them without significant failure rates. I think we'll soon see larger caches and wider buses. 64-bit CPUs may be a brief stepping stone to 128- and higher-bit.
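    The yield/die-size tradeoff the parent describes follows a well-known exponential relationship; here's a minimal sketch using the simple Poisson yield model (the defect-density and area numbers are made up for illustration):

```python
import math

def poisson_yield(defect_density, die_area):
    """Fraction of good dies under the simple Poisson yield model:
    Y = exp(-D * A), with D in defects/cm^2 and A in cm^2."""
    return math.exp(-defect_density * die_area)

# Halving defect density lets you roughly double die area at the same yield:
print(poisson_yield(0.5, 1.0))   # ~0.61
print(poisson_yield(0.25, 2.0))  # ~0.61 again
```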

    Add to this the current focus of Linux, which is Linux on mainframe architectures. The thing is, the very same principles that make mainframes such wonderful beasts are what we're starting to see in the hardware we'll be seeing in the near future. The multi-threaded hardware above, and split-bus architectures both have mainframe parallels. Linux should be ready to take advantage of the new hardware years before Windows is making significant use of it.

    Lastly, the increasing popularity of the Mac and the commoditization of game system components means that we're seeing more and more markets for faster general purpose CPUs. This means more competition and, because of that, better funding for research.

    We may yet push ahead of the oft-misquoted version of Moore's law! :)

  • by cybrpnk2 ( 579066 ) on Monday May 20, 2002 @11:01PM (#3555799) Homepage
    Another advanced technology which may replace silicon in the near term is spintronics [sciam.com]. These devices have one advantage over nanotube transistors in that they may easily implement quantum computing [caltech.edu]...
  • by Anonymous Coward on Monday May 20, 2002 @11:09PM (#3555832)
    Since the IBM experiments (and others done elsewhere) almost always use single wall carbon nanotubes, there are a few issues of practical nature I wonder about with this technology.

    One is that single wall nanotubes are oxygen sensitive. Specifically, contact with O2 will cause single site defects in the nanotube structure, thus causing the whole nanotube to lose its electronic properties. It makes me wonder about how they will package these "molecular transistors" such that O2 can't get to the nanotube, but the encapsulation doesn't cause it to short out.

    Another is that when these things heat up, they do ignite. As we've seen with the light-based ignition shown in Science and here on slashdot, these materials do burn. The above mentioned oxygen reaction sometimes causes the semi-conducting nanotubes to become insulators, thus they heat up, ignite, and disintegrate. So I'm wondering if frying one's nanotube-based chip would be more than just a figurative term if this happened.

    Finally, there is the fabrication issue. I know that in the near future one will be able to make kilotons of nanotubes, and probably even kilograms of single wall nanotubes today (maybe 2kg a year, but you don't need that much if you only need 1 nanotube), but how are you going to fabricate them into architectures on chips with existing chip fabrication technology?

    Maybe IBM has all this worked out. I do have to remember that what they've published today is what they already have covered in patents and what they've been working on already for several months to one year. They don't publish unless they've got more going on AND if they already have the technology protected.
  • low power (Score:2, Interesting)

    by SlugLord ( 130081 ) on Monday May 20, 2002 @11:37PM (#3555948)
    an entertaining idea, even if it simply reduces power output... I can't get the whole text, but it seems like 1 V transistors should allow faster clock speeds (especially since processor speed isn't really much related to transistor speed but rather to cooling speed)
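    The low-power angle can be quantified with the standard CMOS dynamic power relation P ≈ C·V²·f (a sketch; the capacitance and frequency values below are illustrative, not from the paper):

```python
def dynamic_power(c_load, v_dd, freq):
    """Switching power of CMOS logic: P = C * Vdd^2 * f."""
    return c_load * v_dd ** 2 * freq

# Dropping supply voltage from 1.8 V to 1.0 V at the same clock:
p_18 = dynamic_power(1e-9, 1.8, 1e9)
p_10 = dynamic_power(1e-9, 1.0, 1e9)
print(p_10 / p_18)  # ~0.31 -- roughly a 3x power saving from voltage alone
```

    Since voltage enters squared, a 1 V device leaves thermal headroom that can instead be spent on clock speed, which is the poster's point about cooling being the real limit.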
  • by Anonymous Coward on Tuesday May 21, 2002 @12:20AM (#3556096)
    You sound like a Happy Fun Ball.

    It will take a good while before we hit any of the limits that will mean the end of the road for processor architectures. Not least because I have absolutely no idea what you mean by processor architecture, because that denotes a rather wide field.

    Your next paragraph about deep pipelines and SIMD and modern graphics hardware comes down to two things. First, they have very little to do with each other. Second, these are optimizations to deal with limitations imposed by economic circumstances. If memory were as fast as L1 cache and the x86 ISA were not the dominant ISA, then we would have a whole different chip. Suggesting that the parallelization made available by specialized graphics hardware and multimedia ISA extensions generalizes ignores the fact that most processing is not so easily parallelizable, especially not at the granularity offered by the aforementioned optimizations.
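    The "most processing is not so easily parallelizable" point is Amdahl's law in action; a minimal sketch (the serial fractions below are illustrative):

```python
def amdahl_speedup(parallel_fraction, n_units):
    """Amdahl's law: overall speedup when only part of the work parallelizes."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_units)

# Even with 8-wide SIMD, code that is only 50% vectorizable barely doubles:
print(amdahl_speedup(0.50, 8))  # ~1.78
# Graphics workloads are the exception: ~95% parallel work actually pays off.
print(amdahl_speedup(0.95, 8))  # ~5.93
```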

    It is nevertheless conceivable that current economic trends will continue and that it will be found to be more economical to optimize at the chip level rather than at the bus level, but so far we have failure at both the chip level (the pain that is Itanium) and on the bus level (RAMBUS) and I believe we cannot conclude either way.
  • Re:Exciting times (Score:2, Interesting)

    by Anonymous Coward on Tuesday May 21, 2002 @01:39AM (#3556419)
    > This means it's becoming more feasible to make larger and larger dies with more and more on them without significant failure rates...

    Well, actually, the IA64-1 is spec'ing at 130 watts now. That's a lot of heat. Some CPUs already have clock-cycle-stealing thermal circuits to moderate die temp. When actually called on to work, they slow down - a lot.

    I'm sure, well I hope, they have the tricks to take care of this problem. I hope so, and I hope they deal with it soon. I'd really, really hate to see my PC consume more wattage than my Reef Aquarium.

    > They're working on being able to run multiple virtual CPUs, sharing the same execution pipelines and caches, so when one thread is doing a lot of multiplication work, another may be able to use the addition and general flow pipelines.

    I saw this on the IA64 roadmap. Something of an admission that deeper pipelining may be a diminishing return for typical program threads. But if the throughput of virtual SMP is only 70% of a chip not in this mode, then it doesn't seem like a good way to fill all the instruction slots. Maybe plain ol' SMP on smaller chips is a better way.

    Then, of course, there's A-SMP. With memory and celeron prices being what they are, a big boost in throughput could come with brighter system board designs. You could start coding memory op codes, like block copy and block set functions, into the memory controller. You could get I2O out of the proprietary and overpriced gutter and put it to use. There are lots of things that could be done, today, to make it all go faster.
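    The block-copy idea can be sketched as a toy model: count bus transactions when the CPU issues every store itself versus sending one command to a (hypothetical) smarter memory controller. Everything here is made up for illustration; no such controller op codes are assumed to exist:

```python
def cpu_block_set(n_bytes, bus_width=8):
    """Bus transactions if the CPU issues every store itself
    (one transaction per bus-width chunk)."""
    return n_bytes // bus_width

def controller_block_set(n_bytes):
    """Bus transactions if a 'block set' op code is handed to the
    memory controller: one command, zero data traffic on the CPU bus."""
    return 1

# Zeroing a 4 KB page:
print(cpu_block_set(4096))         # 512 transactions across the CPU bus
print(controller_block_set(4096))  # 1 command; the controller does the rest
```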
  • by Anonymous Coward on Tuesday May 21, 2002 @01:46AM (#3556442)
    Linux wasn't really meant to ever be called that by anyone other than Linus. He originally released it under the name Freax (FRee + frEAky + uniX), but Ari Lemmke, the admin of the FTP server Linus posted the original source tree to, didn't really like that name, so he named the upload directory linux instead. Linus himself was actually afraid that people would call him an egomaniac because of the name :)

    Feel free to play again, Troll.
