CNFET Rivals Silicon Performance

Baldrson writes "Applied Physics Letters is carrying a paper on a CNFET (carbon nanotube field-effect transistor) advance that now rivals silicon performance for both n- and p-type devices. There is also a New York Times article which reports that 'it would be two to three more years before I.B.M. was ready to work on prototypes of future nanotube chips and as many as 10 years before they would be commercially available'. This may be what's at the end of the road for CMOS."
  • Come on... (Score:3, Funny)

    by rich22 ( 156003 ) on Monday May 20, 2002 @10:52PM (#3555756) Journal
    When did Yahoo start posting these things before slashdot?

    Yahoo - news for nerds, who have a bedtime.
  • Good to hear that these tubes are not as noisy as was previously observed. They always seemed promising, but with the electrical noise that accompanied transmissions, they just weren't practical.

    Very good news!
  • Repeat? (Score:1, Redundant)

    by Cheesemaker ( 36551 )
    What's the difference between this process and the one reported a few hours ago from MSNBC?
  • Exciting times (Score:2, Interesting)

    by inkfox ( 580440 )
    This is just yet another reason to be excited about technology.

    On top of this, look at the on-chip SMP work that the major processor manufacturers are working on. They're working on being able to run multiple virtual CPUs, sharing the same execution pipelines and caches, so when one thread is doing a lot of multiplication work, another may be able to use the addition and general flow pipelines. Allegedly, in simulations, the multiple virtual CPUs end up executing at about 70% of the efficiency of real, individual CPUs, all with very little extra silicon.

    On top of that, manufacturing is improving to the point where yields are very, very high. This means it's becoming more feasible to make larger and larger dies with more and more on them without significant failure rates. I think we'll soon see larger caches and wider buses. 64-bit CPUs may be a brief stepping stone to 128-bit and beyond.

    Add to this the current focus of Linux, which is Linux on mainframe architectures. The thing is, the very same principles that make mainframes such wonderful beasts are exactly what we're starting to see in near-future desktop hardware. The multi-threaded hardware above and split-bus architectures both have mainframe parallels. Linux should be ready to take advantage of the new hardware years before Windows is making significant use of it.

    Lastly, the increasing popularity of the Mac and the commoditization of game system components mean that we're seeing more and more markets for faster general-purpose CPUs. That means more competition and, because of that, better funding for research.

    We may yet push ahead of the oft-misquoted version of Moore's law! :)

    • Re:Exciting times (Score:3, Informative)

      by inkfox ( 580440 )
      This story also seems like the perfect place to clarify Moore's Law. Please forgive the long post, but I'd love to see this properly understood...
      From Jargon File (4.3.0, 30 APR 2001) [jargon]:

      Moore's Law /morz law/ prov. The observation that the logic density of silicon integrated circuits has closely followed the curve (bits per square inch) = 2^(t - 1962) where t is time in years; that is, the amount of information storable on a given amount of silicon has roughly doubled every year since the technology was invented. This relation, first uttered in 1964 by semiconductor engineer Gordon Moore (who co-founded Intel four years later) held until the late 1970s, at which point the doubling period slowed to 18 months. The doubling period remained at that value through time of writing (late 1999). Moore's Law is apparently self-fulfilling. The implication is that somebody, somewhere is going to be able to build a better chip than you if you rest on your laurels, so you'd better start pushing hard on the problem.

      See also {Parkinson's Law of Data} and {Gates's Law}.

      Note that Moore's Law deals with density, not performance. Moore did later comment that if his prediction continued to hold, computing power would rise exponentially over time, but that was a separate observation, not part of the original prediction.
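      To make the quoted curve concrete, here it is evaluated at a few dates (a minimal sketch in Python, using the formula exactly as the Jargon File entry states it):

          # density(t) = 2**(t - 1962) bits per square inch,
          # i.e. one doubling per year starting from 1962.
          def density(year, base_year=1962):
              return 2 ** (year - base_year)

          for year in (1965, 1970, 1975):
              print(f"{year}: 2^{year - 1962} = {density(year):,} bits/sq in")
          # Per the entry, the doubling period stretched to ~18 months in
          # the late 1970s, so the exponent flattens after that point.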

    • Re:Exciting times (Score:2, Interesting)

      by Anonymous Coward
      > This means it's becoming more feasible to make larger and larger dies with more and more on them without significant failure rates...

      Well, actually, the IA64-1 is spec'ing at 130 watts now. That's a lot of heat. Some CPUs already have clock-cycle-stealing thermal circuits to moderate die temperature. When actually called on to work, they slow down - a lot.

      I'm sure - well, I hope - they have the tricks to take care of this problem, and that they deal with it soon. I'd really, really hate to see my PC consume more wattage than my reef aquarium.

      > They're working on being able to run multiple virtual CPUs, sharing the same execution pipelines and caches, so when one thread is doing a lot of multiplication work, another may be able to use the addition and general flow pipelines.

      I saw this on the IA64 roadmap. Something of an admission that deeper pipelining may be a diminishing return for typical program threads. But if the throughput of virtual SMP is only 70% of a chip not in this mode, then it doesn't seem like a good way to fill all the instruction slots. Maybe plain ol' SMP on smaller chips is a better way.

      Then, of course, there's A-SMP. With memory and Celeron prices being what they are, a big boost in throughput could come from brighter system-board designs. You could start coding memory opcodes, like block-copy and block-set functions, into the memory controller. You could get I2O out of the proprietary and overpriced gutter and put it to use. There are lots of things that could be done, today, to make it all go faster.
    • First, it's not SMP, it's SMT. The distinction matters because one thread among several running on an SMT CPU does not have exclusive access to the CPU's resources. Thus, while utilization of the processor increases, the actual execution of individual threads is slower. For example, since the threads must share the same cache, performance will decrease unless they heavily share memory. This is why SMT can often make a multi-threaded program slower than it would be without it.

      Next, the yields have held relatively constant. The increasing precision of fabrication tools is necessary to support the smaller feature sizes. Having a high yield is not optional, particularly in the commodity x86 market. A chip designer designs for what the technology can accommodate. Small feature size is what lets us have more on a chip, not yield.

      Moving along... No points for guessing larger caches and wider busses. And negative points for guessing that 64 bits will be a "brief stepping stone". You do realize that 2^64 is 2^32 times bigger than 2^32, right? That's big enough to directly address IBM's patent database 8,000 times over. Sure, we will probably want that much addressable memory at some point, but the time until then will not be described as "brief".
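      (The arithmetic is easy to sanity-check; a two-liner in Python:

          print(2**64 // 2**32 == 2**32)    # True: 4,294,967,296x more space
          print(f"2^64 = {2**64:,} bytes")  # roughly 18.4 exabytes, flat

      so "brief" really is the wrong word.)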

      Moving right along... Multi-threaded hardware is not a feature of mainframes. The extra efficiency of SMT is not attractive there. Instead, mainframes use many individual CPUs that can each run a process at full speed, plus massive I/O capabilities.

      "Commoditization of game system components" has meant "Intel chips in a game console", which is hardly fostering competition. Indeed, since "commodity" is synonymous with "x86-based" in the non-embedded markets, the general trend has been toward -less- competition in CPUs. Though AMD is doing well, HP and DEC are now gone from the field, both either becomming allied or being bought by Intel.

      If by the "oft-misquoted version of Moore's Law" you mean "performance doubles every 18 months" (rather than transistors per die), then it is possible... However nothing you mentioned does anything to cause that. Those things that are even related at all are simply the efforts of engineers to keep up the pace. Though the performance corollary of Moore's Law seems to hold true, that doesn't mean it's a "law". The general concern is not surpassing but maintaining the curve.
  • by peter_gzowski ( 465076 ) on Monday May 20, 2002 @11:00PM (#3555793) Homepage
    Come on! The story this repeats is still on the front page! Strangely it's under a different topic...
  • hmm (Score:3, Funny)

    by asv108 ( 141455 ) <asv@@@ivoss...com> on Monday May 20, 2002 @11:01PM (#3555797) Homepage Journal
    I read about this somewhere else [slashdot.org]

  • Another advanced technology which may replace silicon in the near term is spintronics [sciam.com]. These devices have one advantage over nanotube transistors in that they may easily implement quantum computing [caltech.edu]...
  • by Anonymous Coward on Monday May 20, 2002 @11:09PM (#3555832)
    Since the IBM experiments (and others done elsewhere) almost always use single-wall carbon nanotubes, there are a few practical issues I wonder about with this technology.

    One is that single-wall nanotubes are oxygen-sensitive. Specifically, contact with O2 will cause single-site defects in the nanotube structure, causing the whole nanotube to lose its electronic properties. It makes me wonder how they will package these "molecular transistors" so that O2 can't get to them, without the encapsulation causing the nanotube to short out.

    Another is that when these things heat up, they do ignite. As we've seen with the light-based ignition shown in Science and here on Slashdot, these materials do burn. The oxygen reaction mentioned above sometimes causes the semiconducting nanotubes to become insulators, so they heat up, ignite, and disintegrate. So I'm wondering whether frying one's nanotube-based chip would be more than just a figurative term if this happened.

    Finally, there is the fabrication issue. I know that in the near future one will be able to make kilotons of nanotubes, and probably even kilograms of single-wall nanotubes today (maybe 2 kg a year, but you don't need that much if you only need one nanotube), but how are you going to fabricate them into architectures on chips with existing chip-fabrication technology?

    Maybe IBM has all this worked out. I do have to remember that what they've published today is what they already have covered in patents and what they've been working on for several months to a year. They don't publish unless they've got more going on AND the technology is already protected.
    • The oxidation I can't see as a big problem. As far as silicon goes, not too many people realize it, but Si oxidizes on contact with air, becoming an insulator (SiO2). Silicon chips are never allowed the chance to contact air, given the way they are sealed/packaged. This helps to seal out oxygen, water, and sodium, some of the most notorious Si contaminants. I don't think it will be a big problem for them to extend the process to nanotubes. I admit my expertise isn't in nanotubes, though I have worked with them in the lab as field emitters.
  • What're you talkin' 'bout? You an' yer newfangled tran-sis-tors or quan-tum. Why, my punchcard programming skills alone have gotten me on THIS website, not that I'd find much useful. Until they can make a computer that can fetch me my milk bottles, I'll stick to what I have, thank you very much.
  • This could be the end of the road for processor architecture as we know it.

    We're getting to the point where we switch faster and faster, but it still takes time for signals to propagate! Something's going to have to change in the fundamental design of the processor. We're talking exceptionally deep pipelines here, with changes in our data source meaning a huuuuuuuge penalty while dozens of stages are dumped in favor of a new code/data stream. If we're to take fullest advantage of these new architectures, we're talking about structuring computing around SIMD-style programmed tunnels which execute identical operations on fat streams of data, not depending on the results for execution control. This would be akin to writing programs the way you write shaders on modern graphics hardware.
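    A loose sketch of that "programmed tunnel" style, with NumPy arrays standing in for SIMD/shader-style hardware (the array name and size here are made up for illustration):

        import numpy as np

        stream = np.random.rand(1_000_000)   # the "fat stream" of data

        # Branchy, scalar style: control flow depends on each element,
        # which is exactly what flushes a very deep pipeline:
        #   out = [x * 2 if x > 0.5 else x / 2 for x in stream]

        # Stream style: identical operations over the whole array; the
        # branch becomes a data-parallel select, so execution control
        # never depends on the results.
        out = np.where(stream > 0.5, stream * 2, stream / 2)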

    Whether these changes can happen without fundamentally restructuring the way we program remains to be seen. But we're fast getting to the point where something's got to give if we're to take fullest advantage of these new technologies.

    Or maybe this just means cycle tuning and assembly are coming back in style. :)

    • Well, there is talk about using asynchronous sequential circuits to eliminate clock-propagation problems. You can also get higher speed with lower power than clocked designs. However, asynchronous techniques present their own problems, especially with process variations. That's why I'm still a bit iffy on tackling asynch crap as my EE master's thesis.
    • The end of the road for CMOS?

      From what I understand, it's not really the end of CMOS. Their transistor seems to be a MOSFET (metal-oxide-semiconductor field-effect transistor) built on carbon nanotubes instead of silicon. That would mean you could still do CMOS (complementary MOS) with it (maybe called CN-CMOS).
    • by Anonymous Coward
      You sound like a Happy Fun Ball.

      It will take a good while before we hit any of the limits that would mean the end of the road for processor architectures. Not least because I have absolutely no idea what you mean by "processor architecture", since that denotes a rather wide field.

      Your next paragraph about deep pipelines and SIMD and modern graphics hardware comes down to two things. First, they have very little to do with each other. Second, these are optimizations to deal with limitations imposed by economic circumstances. If memory were as fast as L1 cache and x86 were not the dominant ISA, we would have a whole different chip. Suggesting that the parallelization made available by specialized graphics hardware and multimedia ISA extensions points the way forward ignores the fact that most processing is not so easily parallelizable, especially not at the granularity offered by the aforementioned optimizations.

      It is nevertheless conceivable that current economic trends will continue and that it will prove more economical to optimize at the chip level rather than at the bus level, but so far we have failure at both the chip level (the pain that is Itanium) and the bus level (RAMBUS), and I believe we cannot conclude either way.
    • and it has been around for years (the second-oldest programming language still in use?). There is a modern-day version called K [kx.com] that will crush C-based systems when full SIMD support is implemented (it already does a good job of thrashing most languages). Collection-based languages used to be much bigger than they are today, for some reason. Current programming paradigms go in one direction (OO) while we keep having to deal with data in the other direction (bulk operations).
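      To illustrate the two directions, a small sketch in Python rather than K (K's actual syntax is far terser; the names here are made up):

          prices = [10.0, 12.5, 9.75, 14.0]

          # Item-at-a-time (the "OO direction"): one element, one step.
          taxed = []
          for p in prices:
              taxed.append(p * 1.08)

          # Collection-at-a-time (the "bulk" direction): one operation
          # over the whole collection, which is what maps onto SIMD.
          taxed_bulk = [p * 1.08 for p in prices]

          assert taxed == taxed_bulk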
      • There is a modern-day version called K [kx.com] that will crush C-based systems when full SIMD support is implemented (it already does a good job of thrashing most languages)
        Have you more info on K - an authoritative page? Understandably, it's a tough term to Google on.
  • by jdbo ( 35629 ) on Monday May 20, 2002 @11:19PM (#3555873)
    In that case, the Boba-Fett process must totally kick CMOS's ass!

    ...I'd say this poses a danger to IBM, except that they already have experience with surviving an "Attack of the Clones"...
  • by watashiwananashidesu ( 563740 ) on Monday May 20, 2002 @11:21PM (#3555884) Journal
    So... ten years from now, is it going to be "Carbon Valley?"

    That just doesn't have the same ring to it, ya know?
  • If you photograph this thing, it will catch on fire!
  • low power (Score:2, Interesting)

    by SlugLord ( 130081 )
    An entertaining idea, even if it simply reduces power consumption... I can't get the whole text, but it seems like 1 V transistors should allow faster clock speeds (especially since processor speed isn't really much related to transistor speed but rather to how fast you can pull the heat out).
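    For the curious, the usual first-order reason a 1 V supply matters: CMOS switching power scales roughly as P = C * V^2 * f, so halving the voltage cuts switching power to a quarter at the same clock. A sketch with made-up numbers (the capacitance and clock below are illustrative, not from the paper):

        def dynamic_power(c_farads, v_volts, f_hertz):
            # First-order CMOS dynamic power: P = C * V^2 * f
            return c_farads * v_volts**2 * f_hertz

        C, f = 1e-9, 2e9                  # 1 nF switched capacitance, 2 GHz
        print(dynamic_power(C, 2.0, f))   # 8.0 W at 2.0 V
        print(dynamic_power(C, 1.0, f))   # 2.0 W at 1.0 V: four times less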

  • Since it rivals the performance of both n and p, does that make it faster on NP-complete problems?

  • Hey, what's up with that link to the paper that requires you to be a subscriber? Is there a /. username/password for reading the content? If ordinary mortals can't read the most important link (everything else is just fluff) why post this article, especially since the same fluff reported by MSNBC was posted earlier?
  • Hey guys, I would never ask what you smoke on the occasion of a repeated story, since such a question is clearly offtopic. However, it might be interesting for the poor folks who only know (and regularly visit) Slashdot to learn from you editors: what news sites do you visit? Where do you go for discussions? I mean, the only thing we know is that it isn't Slashdot.
