Science

Single Molecule Memory 134

techtrend linked us up to a paper from Mark Reed and James Tour on single molecule memory which, if it comes about, will pretty much make space irrelevant. They say the technology is 3-5 years off.
  • But so what? I think that we WILL have molecular computers sometime soon, but even if we had them TODAY, they would take a long time to ride the curve past our current silicon setups. Our computers are this fast because we have gotten really good at making them; it will take time to build a computer based on a new design theory that is faster/cheaper/more powerful than available silicon. I'll probably be out of grad school (which I am a year or two away from entering) before I see one.

    Someone should probably make some crack about running Beowulf on molecular computers... :)

    -Crutcher
  • ...which is what quantum computing is all about. (Well, that and the neat electron tricks you can play with that encoding.)
  • That's true, but computer technology hasn't improved drastically. Computers are basically the same. They have faster processors, faster bus speeds, more memory, bigger screens, etc. See what I mean?
    A few exceptions may be the advent of simple things like USB2, FireWire, copper process, silicon-on-insulator, high-density flash memory...

    This molecular based memory sounds great but when will we see any of this in practice? But we can dream.
    --
  • An interesting paper from August titled Ultimate Physical Limits to Computation, by Seth Lloyd of MIT, has been submitted to Nature. The abstract and article in TeX, PS and more can be found at http://xxx.lanl.gov/abs/quant-ph/9908043 [lanl.gov].
    It's math-heavy, but still accessible to a lay audience, and a fun read insomuch as Lloyd goes so far as to talk about such outlandish theoreticals as black hole computers.
  • I'm unsure of the *quality* of manufactured diamonds, but we do have the ability to make 'sheet' diamond atop metal filaments.

    Essentially, you run a tungsten heating element under some methane gas in a non-combustible environment (nitrogen and the like), under high pressure, and diamond will start to grow upon the surface.

    I may have a few technical details wrong, but I think that's the process.

    -AS
  • I worked with Professor Mark Reed from 1993-1995, and I can definitively say that he is (a) completely on top of the field, and (b) is probably closer than any other researcher to making this stuff work.

    The problem is that most of the really exciting research results don't work above liquid nitrogen temperatures, and some don't even work above liquid HELIUM (4K) temperatures!

    But I actually saw the quantum dot working, and helped perform some of the analysis of it (on some good old VAX hardware!) I also helped construct a custom I-V trace unit which used a wiggle voltage to produce better curve traces of the results. Some of these novel quantum semiconductor devices (see, for instance, the I-V trace of this one [yale.edu]) are actually capable of operating in more than just one single state -- the multiple plateaus in the 9 T graph show that this device can operate as a trinary logic device if you know what you're doing. Then again, it requires a 9 Tesla field to bring out these characteristics...

    As I've said before on /., we need to solve the temperature and interconnect issues. Interconnect may have a new solution, per that article on molecular computing posted a few days ago here. Our materials science friends, though, need to keep making progress on materials which possess these unique characteristics at room temperature.

  • Well, by "technology" I really mean "fab technology" and "design technology" ..... how do you build lots of them cheaply? How do you design them? (CAD tools are expensive, and existing designers have a lot of money and mindshare invested in what they are using today.) Etc., etc.

    Being able to build a few gates in the lab is one thing .... getting it to a mass market, and cheaper than the existing technologies with all the momentum they have going for them, is really hard. Not impossible, mind you ..... just harder than most people think

  • Oops -- forgot to mention, this is NOT 3-5 years off. Rather, this shows what technology will be HOT in the research community 3-5 years from now. Materials and such still must advance by a few leaps and bounds before this will show up on your desktop.
  • I'd think the molecules would probably do something like change their shape in response to voltage... if you're thinking it's maybe a chemical process like in the human nervous system, I doubt that's the case - that'd be pretty slow
  • "With the single molecule memory, all a general-purpose ultimate molecular computer now needs is a reversible single molecule switch," said Reed. "I anticipate we will see a demonstration of one very soon."

    In other words, right now they can change the "memory" state of the molecule from 0 to 1 once, but they can't tell it to flip back, right? They need to develop the switch. And after that, they'll probably need to develop a general purpose way to read the memory (and write, of course). And it will probably take time to integrate this stuff with existing electronics.

    Don't get me wrong, this is definitely exciting stuff! But don't expect to see the "general-purpose ultimate molecular computer" anytime soon.

    --
    Ernest MacDougal Campbell III / NIC Handle: EMC3

  • Ummm, just how is this post offtopic?

    Just curious, seeing as how it's a memory-related post to an article on memory.


  • I think it would be more like, "I'd like 2 mole-bytes of memory please."

    Clerk hands over a small flask, "That'll be five bucks."

    If/when this comes to fruition, people will talk about today's paltry storage capacities the same way we currently talk about those "old 1K machines". Of course the downside is you'll probably need 500 or 600 mole-bytes to run everyone's favorite Redmond graphical shell.

    (for those who don't remember their chemistry, a mole is 6.02e23 "things", i.e., molecules, atoms, minutes until class is over.... Compare that with a terabyte, which is roughly 1e12 bytes, and you're talking vast increases in capacity. As for how much is in a mole, take 18 mL of water and drink it. You've just swallowed a mole of water...feel full?)

    Manipulating anything on this vast a scale, in terms of individually accessing each molecule, boggles the mind. But it's sure going to be fun.

    Mmm.....years and years of MP3s stored in memory...
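    A quick sanity check of the mole-vs-terabyte comparison above (assuming one bit per molecule, and the decimal terabyte the parent uses):

```python
# Back-of-the-envelope: capacity of one mole at 1 bit per molecule
AVOGADRO = 6.022e23          # molecules per mole
TERABYTE_BITS = 1e12 * 8     # decimal terabyte, as in the parent post

terabytes_per_mole = AVOGADRO / TERABYTE_BITS
print(f"One mole at 1 bit/molecule ~= {terabytes_per_mole:.1e} terabytes")
# -> roughly 7.5e10, i.e. tens of billions of terabytes per mole
```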

  • by jd ( 1658 )
    Jam yesterday, jam tomorrow, but never yottabit memory systems today.

    I agree that "speculative science" usually ends up yesterday's "science fiction" and last week's "mad ramblings". On the other hand, no speculation, no progress. Without trying to reach forward, people have a habit of sliding backwards.

  • I thought it was exabyte
  • by decaym ( 12155 ) on Wednesday November 03, 1999 @10:51AM (#1565780) Homepage
    Actually, it's "peta", which is then followed by "exa", then "zetta", and finally "yotta". There is a short breakdown of the meaning of the names at http://www.ccsf.caltech.edu/~roy/dataquan/ety.html [caltech.edu], which tells where the names come from. There is a much larger listing of magnitudes at http://www.mcs.csuhayward.edu/~malek/Mathlinks/Billion.html [csuhayward.edu].
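    The prefix ladder can be sketched as a small helper (the function name and the decimal-powers convention here are just illustrative):

```python
# Name the decimal magnitude of a byte count, walking the SI prefix ladder
PREFIXES = ["kilo", "mega", "giga", "tera", "peta", "exa", "zetta", "yotta"]

def magnitude_name(n_bytes):
    # Divide by 1000 until the value drops under 1000, tracking the prefix
    value, name = float(n_bytes), "byte"
    for prefix in PREFIXES:
        if value < 1000:
            break
        value /= 1000
        name = prefix + "byte"
    return value, name

print(magnitude_name(1e15))   # -> (1.0, 'petabyte')
print(magnitude_name(1e24))   # yottabyte territory
```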
  • hey, thanks man. Maybe next time I should post yet another joke about cryptography export laws. Somehow that's always "funny" and never "offtopic"... I thought the same was true of Bill Gates jokes, but I guess not.
  • All the cool technology is 3-5 years off. In 5 years it'll still be 3-5 years off.
    --
  • for anybody.
  • The article doesn't actually state that the tech is 3-5 years off, just that they are discussing where the tech will be in 3-5 years. Something like this seems like it would be a little farther off than that.

    Still, I'd like to see virtually unlimited memory before I'm thirty. I wonder how that'd affect programs? No more memory management?
  • I see what you're saying, but I don't think it's very fair to say it - think back 3-5 years; technology has certainly improved since then.

    "Then I'll tell the truth. We're allowed to do that in emergencies."

  • by Nathaniel ( 2984 ) on Wednesday November 03, 1999 @10:19AM (#1565786)
    It simply talks about the concept, and says "Papers presented at the International Electron Devices Meeting ... give important clues about where electronics technology will be three-to-five years from now."

    But nothing in the article says that anything in this particular paper will be implemented any time soon.

  • by Anonymous Coward
    Chemistry is rife with systems that can adopt two states, be they conformations, oxidation states, spin states, whatever. Making a bit has never been a problem. The trick is reading and writing.
  • by Forward The Light Br ( 21092 ) on Wednesday November 03, 1999 @10:19AM (#1565788)
    this would be cool, except:

    we still don't have a decent way to transfer information through 1-molecule-sized pathways...
    (keep in mind, they have flipped the gate and watched with a microscope... they did not do anything useful with it)

    the gate/transistor can be that small, but if the path to get there is not, who cares?

    (electricity can't work well at that size; if the pathways are that small and at all decently near each other you will get massive electron tunneling, where electrons hop over to the next pathway) (this is bad ;-)

    optical pathways have not been made to work yet AFAIK, and even they would have problems at that level

    on a more holistic level, fusion was supposed to be done 20 years ago, and those incredibly large hard drives the size of my pinky were supposed to be done by now....


    this is cool and all, but it is research that will not bear fruit for a LOOONG time

    -RS
    We are all in the gutter, but some of us are looking at the stars --Oscar Wilde
  • Electron spins would be a decent place to store bits...

    The ultimate really should be not storing bits. Seriously, whatever happened to fuzzy logic? I know that all conventional logic still holds true under fuzzy rule sets, so all software could be emulated.

    For example:
    -Analog music sounds better than sampled music
    -People don't really use 1-bit logic in everyday life
    -Fuzzy machines perform much better in real-world tasks than traditional logic

    Where are the attempts at hardware fuzzy logic? I know the obstacles in voltage regulation are staggering, but... you would think that there would be more research. The main reason we use the binary number system is to emulate a switch. I would like to see how fast a variable-switch processor would be.

    -Pos

    The truth is more important than the facts.
  • A petabyte, then a hexabyte...and some other things that haven't been settled yet. Check out Data Powers of Ten [caltech.edu] for more info, and a great comparison of scale.

    -lx
  • Remember, Intel and Microsoft have nothing to do w/ this, so it might actually be done in 3-5 years
  • The article skimps on the important details quite a bit. What kind of molecule are we talking about, here? O2? Hemoglobin? I mean, there isn't really any size limit to how big a molecule can be, so this might not even be all that interesting.
  • Just for the record, spin numbers are -1/2 and +1/2, not 0 and 1.

  • Maybe it hasn't improved drastically, but I don't think this molecular based memory would improve anything drastically either.

    But I agree with you otherwise, this isn't going to change any of our lives significantly.

    "Then I'll tell the truth. We're allowed to do that in emergencies."

  • 1) How fast can a computation happen (in a physical system) in theory?
    A: The answer depends on the physical system. There is no hard theoretical limit on computation speed (only unreliable estimates based on current and developing technologies).

    2) How fast could molecular gates and molecular bits effect a computation?
    A: The answer depends on the way these gates work. Who knows? Maybe they will use quantum tunneling to have a gate delay shorter than the time it would take light to pass through the space the gate occupies.

    3) How do you estimate these numbers would translate into Teraflops?
    A: Teraflops are meaningless out of context, and they are highly dependent on design. There is no limit to the teraflop rating of something made with current technology if you allow SMP.

    Sorry, but the questions are just not answerable.
  • we still dont have a decent way to transfer information through 1 molecule sized pathways...

    The materials being worked on are organic (rotaxane, etc.) which, of course, involve carbon. Hence, some of the advances over the last few years (carbon nanotubules or "Buckytubes", etc.) in carbon nanostructures would probably lend themselves nicely to the effort.

    Kythe
    (Remove "x"'s from

  • If you had a super-computer the size of a mitochondrion, where would you keep it?

    "Nobody move, I dropped my Cray!"

    Even if memory could be made on the molecular level, and processors could flip electrons instead of bits, do you all think we could afford the Scanning-Tunneling microscopes we'd need for I/O? I mean, hell! I like having a big monitor. There's no way to plug it into a sugar-cube computer..

    Every time you sneeze, you'd have to get new hardware.
  • by Anonymous Coward
    Here's a formula they forgot to teach us in university: Dilbert's "Salary Theorem" states that "Engineers and scientists can never earn as much as business executives and sales people." This theorem can now be supported by a mathematical equation based on the following two postulates:

    Postulate 1: Knowledge is Power.
    Postulate 2: Time is Money.

    As every engineer knows, Power = Work / Time. Since Knowledge = Power and Time = Money, we have Knowledge = Work / Money. Solving for Money, we get Money = Work / Knowledge. Thus, as Knowledge approaches zero, Money approaches infinity, regardless of the amount of work done. Conclusion: The less you know, the more you make.
  • Normal computers will never see the advantages of molecular memory. The big companies will charge billions to license its usage, and the Koreans and Taiwanese will charge more when they actually get around to making RAM based on it. "Molecular Memory"... ooooh, molecular, that's gotta be a word worth at least $4/kilobyte! If you don't believe me, then try buying 128MB of L2 cache RAM. Imagine how expensive and rare that is.. then imagine how expensive it'll be to get a "molecular memory" "MRAM" chip. Start saving up, kiddies!
  • by fremen ( 33537 ) on Wednesday November 03, 1999 @11:04AM (#1565804)
    You can see what the Tour research group is up to at Rice by going to his homepage at http://www.jmtour.com/ [jmtour.com]. There is information about this project at http://www.jmtour.com/info.htm [jmtour.com]. Scroll down the page a bit.

    Finally, don't forget that you can see more about the Rice nanotechnology program at The Rice Quantum Institute [rice.edu] and The Center for Nanoscale Science and Technology [rice.edu]. Don't forget that Rice is where the Buckyball craze started, with Smalley and Curl winning the Nobel for the discovery of its shape.
  • MicroSoft will release a new version of Windows that will be twice as infinite :)
  • According to my understanding of this diagram [aip.org] from Physics News, the "self-assembly" process involves tiny little elves with Santa Claus hats. They seem to drag a conductor over a gap in some sort of substrate.

    I'M NOT KIDDING! Take a look at the picture [aip.org]! This may only relate to his earlier work on molecular wires, but I assume that once you have figured out how to control the elves, the rest is rather straightforward.

  • First off, spin is imaginary anyway. And molecules can have more than two oxidation states. Suppose it's eight? One byte per molecule instead of one bit? I don't know, the article didn't give any real details on how any of this actually works. I sure would like to know how they are going to read and write to one molecule.
  • I don't know if anyone else caught this, but the most interesting thing about this article is the fact that it mentions the memory was created by "self-assembly". This is a major step towards nanotechnology. We've finally started building things from the ground up, as it were, rather than from the top down. Regardless of whether or not this memory catches on, the fact that it was literally grown will revolutionize the industry.

  • They should hook up with the guys at HP who are working on molecular logic gates and carbon nanotube interconnects... I think that would be ultra-spiffy =:-)
  • IIRC, there is actually a theoretical minimum on the energy needed to flip a bit. Assuming limitless energy input, the limiting factor becomes how fast you can remove waste heat.

    I have no numbers, but I think there's a section in Applied Cryptography that discusses some theoretical maximums of computing power. My copy's at home - anyone have theirs handy?
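    If the theoretical minimum the parent is recalling is Landauer's principle (an assumption -- the comment doesn't name it), the floor is k_B * T * ln(2) per bit erased. Plugging in room temperature:

```python
import math

# Landauer's principle: erasing one bit dissipates at least k_B * T * ln(2)
K_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # room temperature, K

e_min = K_B * T * math.log(2)
print(f"Landauer limit at {T:.0f} K: {e_min:.2e} J per bit")
# -> about 2.9e-21 J
```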

  • That's right. Let's look at the evidence about memory and speed.

    Microsoft Office vs Sun's Star Office

    uh wait...

    Internet Explorer vs Netscape Navigator vs Hot Java

    uh wait...

    VB/VC/J++ vs Sun's Java

    uh wait...

    never mind.
  • Actually, it takes approximately 2 petabytes, from what I've read, to store all sensory inputs and all other memory (the things that we thought but didn't necessarily have inputted) for a human over an average lifespan.
    Assuming 1 bit per molecule, 6.02e23 / 8 / 2^50 works out to about 67 million petabytes per mole, or roughly 33 million lifespans at 2 PB each. APPROXIMATELY.
    In a mole...... shrugs.. that's not that much, really. Windows 2050 will take at least that. (Even then.. imagine the load time.. though I guess you would buy memory preloaded with Windows and just never let it lose its power... assuming of course it works like that..)
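    For the curious, the parent's arithmetic can be rechecked in a few lines (1 bit per molecule, a petabyte taken as 2^50 bytes); note that dividing by the full ~2 PB per lifetime gives about 33 million lifespans, half the raw 67-million-petabyte capacity:

```python
# Rechecking the arithmetic: lifespans storable in one mole of molecules
AVOGADRO = 6.02e23
PETABYTE = 2 ** 50                 # bytes (binary petabyte)
BYTES_PER_LIFETIME = 2 * PETABYTE  # the ~2 PB-per-lifespan estimate above

mole_bytes = AVOGADRO / 8          # 1 bit per molecule -> bytes per mole
lifespans = mole_bytes / BYTES_PER_LIFETIME
print(f"~{lifespans:.2e} lifespans per mole")
# -> about 3.3e7, i.e. ~33 million
```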

  • exactly what i thought when i read the article. this is a trend at slashdot lately, by the way. since the article is only about one page long, i can only assume that mr. taco doesn't really even try to read the articles before coming up with a catchy blurb that invariably gets corrected by slashdot readers in the comments section. what's up guys, maybe you can try a little harder...

    sh_
  • Yeah, but it'll probably be able to do everything before you think about it. Recognise you and your dog's voice, etc.

    Also, why don't you try listing requirements for Solaris 12 or Linux 10.9.5.3.1 and X12R3..or even better, Java, StarOffice, Netscape.
  • I saw many Science articles on molecular memory and computers in 1993. It seems whenever grant money for this kind of thing dries up, people start calling portals and TV stations saying they're on the verge of a major breakthrough. Why are they only on the verge of major breakthroughs when their grants run out?
  • I just assume that memory/speed per dollar will continue to improve at the same rate as it has for the last 30+ years. Articles like this simply show us how technology will look past the point where silicon fails. With respect to Electron Tunneling, is there any reason sufficient error correction codes could not be used to overcome problems of this type? Why not use 10 molecules per bit! JJ
  • This is old hat. Why, I already have a single molecule memory. It's even... Now what was I saying? Oh yeah, that slugbot is really neat!
  • AMD's Fab 30 is ready to go, and we should see copper Athlons next year
  • Imagine if we could do it.
    If the following conditions were met:

    1. Sufficient error correction to overcome things like electron tunneling and interference (10 molecules per bit, or whatever).
    2. We had a decent way to interface to such memory and avoid interference.
    3. It operated at a speed comparable to current or future silicon RAM.

    Then, using the space current hard drives take up, we could have a storage device holding petabytes of memory that is non-volatile. Magnetic media becomes virtually useless. 1 petabyte for nonvolatile storage and 1 petabyte for working space. No mechanical disk subsystem, and I/O speeds go completely through the roof. I for one would like to see it happen. You could scrub through a huge database with a processor slower than today's, and it would still be faster once disk I/O is removed from the equation.

    I would only have the following issues which would need to be addressed.

    1. Microsoft-style programmers could write even sloppier code, as memory leaks would become virtually unnoticeable unless they were huge.

    2. Petabyte sounds like some weird porkemon (yes, that's PORK) character, and I despise all these *mons that are coming out. All I hear out of my kids these days is I WANT PIKACHEW. Five years from now it will be I WANT THE PETABYTE PIKACHEWBACCA!!
    Help me please!!!
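    The "10 molecules per bit" error-correction idea above is essentially a repetition code read out by majority vote. A minimal sketch (illustrative only -- a real molecular memory would likely need stronger codes):

```python
from collections import Counter

# Repetition code: store each logical bit in n physical molecules,
# read back by majority vote to mask individual molecule errors
def encode(bit, n=10):
    return [bit] * n

def decode(cells):
    # Majority vote over the physical cells; with n=10 this
    # tolerates up to 4 flipped cells
    return Counter(cells).most_common(1)[0][0]

word = encode(1, n=10)
word[2] = 0  # a couple of molecules flip due to tunneling/noise
word[7] = 0
print(decode(word))  # still reads back as 1
```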


  • Spin is imaginary? I hope you don't mean to say it's an imagined property. It may not be a directly physical property (not something you can tell just by looking), but it is definitely there.
  • I remember back in high school the "hot new thing" being that since silicon was going to be too slow, computers would Real Soon Now be based on light rather than electricity, and we'd be doing switching based on diamonds rather than transistors.

    This made it sufficiently into public consciousness that Michael Crichton's book, Congo, had this as the "plot point" behind the search for "uniquely pure" diamonds.

    19 years have now passed since the book was written; computers are not yet based on diamonds.

    Thinking back only to 1998, IBM announced that PPC chips that used a copper-based production technique would provide massive performance increases; it is not clear that this is yet being deployed in present PPC systems.

    I suspect that "single molecule" memory elements are more than 3-5 years away.

  • There is no ultimate. As long as computers run slower and store less data than reality itself we have at least one step more. Anybody who stops and holds up a sign proclaiming they've found the ultimate smallest piece or similar bullshit is as stupid as those religious nuts that hold up signs proclaiming 'Repent, the End is Near'.

    My question is if we find a way to compute more than reality itself then can we jump to a higher plane of being? :)
  • This will be a great technology to link with MEMS [sandia.gov]

    Just add the transistors, and the dream of powerful (like ~286/386-powered, or at least DragonBall (Palm V)) smart dust [berkeley.edu] will be a reality. (Or smart cereal; just think, your daily internal diagnostic exam could happen over breakfast.)

  • Holy shit, I practically pissed myself when I saw that. I wonder where they got the elves? I would think Smalley would use these little guys in his buckytube research. Now the only question for building computers is: what are the hourly rates for the little fellers?
  • I love irony , Microsoft is the reason I have a computer with 256MB of ram HEHEHEHEHEHE
  • Oh yeah, and then of course by that time Word will take 95 petabytes just to load if it was installed with the 'minimum install, no fonts' option ... Sorry, I couldn't resist.
    --
  • How big is this single molecule anyway? It could be conceivably any size! A quick search of the Protein Data Bank (yes I was bored, but there's a fairly interesting mail about it [rcsb.org]) and it turns out that some molecules are 132,000 atoms big (or even bigger). You could probably build a whole cpu out of something that big...

  • Hey, at least it's not as bad as the phone company, where the government gives them money and then makes US pay the government back. It pisses me off when I think about it.
  • Spin is merely a graphical image, used to help you visualize a force that has no real analogue in the macroscopic universe. "Spin" is a fiction, but so far as modern science goes, the force/idea behind it is real.
  • My roommate had this mentioned in his electrical computing class at Rice today.
    The prof printed out copies for everyone.
    You can find YOUR copy at:

    http://www.nytimes.com/library/tech/99/11/biztech/articles/01nano.html [nytimes.com]

    Usual free registration/login for nytimes.
  • by mouseman ( 54425 ) on Wednesday November 03, 1999 @05:52PM (#1565842) Homepage
    There seems to be a great deal of misunderstanding about quantum computing, both in this post and in the replies to it (and on /. in general). The key idea behind quantum computing is to use "quantum superposition" to effectively perform many computations in parallel -- without requiring parallel hardware.

    It is not just a single bit that can be in a superposition of states, but all the bits of the computation. (A superposition of states can be described as probability distribution over all the possible states the system could be in). Thus, the limit on the number of parallel computations in a binary quantum computer is 2^N, where N is the number of quantum bits (qubits) used in the computation.

    The degree of parallelism this implies is staggering. Some problems that are believed to require exponential time to solve on a classical computer could be solved in polynomial time on a quantum computer. This includes factoring (think RSA encryption).

    Some people overgeneralize and think that a quantum computer could solve NP-complete problems in polynomial time. Unfortunately, that's not the case (or at least, hasn't been proven). To get an answer out of a quantum computer, you need to be able to get all of the exponentially many wrong solutions to somehow "cancel out," leaving the correct solution. Doing that in the general case is non-trivial and probably impossible. Quantum compilers are a long way off.

    But so far, quantum computers have proven difficult to build. The problem is getting a useful computation without a "collapse" of the wave function. (A collapse is when the system rolls the dice or whatever it does, and settles on a single state to be in. Oops, there goes your parallelism!) The biggest quantum computer I've heard about has 2 qubits. An impressive achievement, but not quite ready to port Linux to.
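    The 2^N amplitude count described above is easy to see with a toy state vector (pure-Python bookkeeping, not a real quantum simulation):

```python
import math

# Classical bookkeeping of a quantum register: N qubits -> 2^N amplitudes
def equal_superposition(n_qubits):
    dim = 2 ** n_qubits
    amp = 1.0 / math.sqrt(dim)
    return [amp] * dim   # one amplitude per classical bit-string

state = equal_superposition(3)
print(len(state))                        # 8 basis states for 3 qubits
print(round(sum(a * a for a in state)))  # probabilities sum to 1
```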

  • by goon ( 2774 )
    all a general-purpose ultimate molecular computer now needs is a reversible single molecule switch
    To a point, this has already been achieved. I was reading in New Scientist about qubits and quantum computing.

    It works like this... using an electron spin or photon polarisation, qubits allow you to represent both 1 and 0 at the same time. Using the physical properties of electron spins you can generate different states (00, 01, 11, etc.). Read the full articles here...

    http://www.newscientist.com/nsplus/insight/quantum/48.html
    http://www.newscientist.com/ns/980418/nquantumcomputer.html
  • The key word here is "probably". No one has gotten nanotubes to self-assemble, and more importantly, research into defects in nanotubes is only beginning. Electromigration is always an issue for ULSI. And then of course cross-talk, in the form of tunneling or scattering, is an issue for these materials as well. Lastly (this is not a joke), at those scales information carriers have to travel very fast -- so fast, in fact, that special relativity limitations become fundamental. My guess is that it will be easy to get to the 100 nm scale, hard to get to the 10 nm scale, and virtually impossible to get to 1 nm. I doubt we will move beyond 0.13 - 0.10 micron technology in the next ten years.
  • When the amount of memory available to a computer becomes effectively infinite? If each PC can hold the contents of the entire Internet, Library of Congress, etc. and still have room for more, we will effectively have access to any information we care to store at our fingertips. Think library computer from Star Trek on steroids...
  • It just won't be cool anymore by then.

    Interesting idea that you can control a single molecule. But can it be done fast? Is it expensive? If not, it won't be any good for hard drives or internal memory. The article doesn't go into detail on these issues.

    Will it beat holographic storage and other interesting techniques?

    Interesting times are ahead of us. I could have said that anytime during the last 100 years or so and I hope I can keep saying it for quite some time.
  • Although I agree with the "Figures" poster that the 3-5 years always gets pushed back 3-5 years, there seems to be an awful lot of tiny-device stuff breaking out these days; I think perhaps a change is at least somewhere in the next 10-20 years.

    I do know that with the immense growth of databases, such technology would seriously kick ass. You could run mega multiple instances of DBs on a single server, which isn't popular now because of resident memory size (well - unless you have a crapload of memory).

    I wanna know about disks, though. I keep reading stuff about nanotechnology-driven processors and now molecule memory - what is happening with disk technology, or rather semi-permanent technology like disks?

  • So is the government going to start subsidizing chip makers in the future?

    Hell, they pay the farmers to not produce certain crops in order to maintain higher prices for all. Something I think is so silly.

  • Doesn't matter how much space you give someone, they can fill it. I mean, think of all the things we don't store on hard drives. Once we could, I imagine we could fill something up pretty fast. I mean, now that DVDs have been cracked (a few stories back [slashdot.org]), you could store DVD images on your computer... If drives were cheap enough, you'd buy a few extra and just back up to nonremovable disks now and again... There's always ways to fill up space. I doubt this will help the end user as much as we'd like it to. I mean, if each person has 200 terabytes (to pick a random large number) the next Windows or Office release will probably be 150 of those...

    Now, for other purposes than individual users, this has interesting implications in terms of computational capabilities, especially if this memory can become quite fast.

  • But give it two years after the technology becomes widely available. This will make possible programs that take up a huge amount of memory, and therefore space may once again become relevant.
    Have you seen the size of an average computer decrease anytime in the past ten years? I keep my 300 mhz AMD K6 with 12 GB of space in a box that used to belong to a 386. If something takes up less space, all this means is that they'll be able to stick more of it in the same case.
  • The next cool thing is always 3-5 years off. Until we can use it in the real world, it's just an idea in some guy's head.
  • by Anonymous Coward
    No, molecular is not the ultimate. As my physics teacher of a decade ago suggested, the ultimate is storing bits in electron spins. Up spin=1, Down spin=0.
  • Who needs hard drives ;-) We can store all of those cracked DVDs in memory now (on our BSD laptops), and with ultra-high-speed-broadband-or-whatever transmission, we can send 6GB of data in no time flat. In fact, Echelon would have even more infinite stores, and would be able to capture all of this, and (as long as our quantum encryption is successful) be the world's largest holder of junk mail. Now if they could only combine the robofly with VanEck phreaking, that'd be cool...
  • ...if these guys [angstromtools.com] do what they say they will.
  • If I remember correctly, they were seeking uniquely impure diamonds. Essentially, they wanted diamonds that were pre-doped with a certain element that turned them blue and made them a good substrate for circuits.
  • You're probably aware of this, but there is a hard physical limit on how fast any physical computation could happen, though that limit might never be reached (due to other factors that would come into play before then). That hard limit comes from the Heisenberg uncertainty principle -- one way of stating this is, roughly, (delta Energy) * (delta Time) >= hbar/2 (hbar being the reduced Planck constant). This is a fundamental property of the universe as we (think we) know it, and I can't think of a way that any computational device could get around it. That is, if you're switching something within a particular energy band, there is a hard limit on the timeframe it will take to do that, given by the above equation.
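    To put a rough number on it, here's a back-of-the-envelope sketch of my own (not from the article), using the energy-time uncertainty relation in the form delta-E * delta-t >= hbar/2 and a 1 eV band as an illustrative energy scale:

```python
# Back-of-the-envelope limit from the energy-time uncertainty relation,
# delta_E * delta_t >= hbar / 2 (values are standard physical constants).
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
EV = 1.602176634e-19     # one electron-volt in joules

def min_switch_time(delta_e_joules):
    """Order-of-magnitude minimum time to switch a state whose
    energy band is delta_e_joules wide."""
    return HBAR / (2 * delta_e_joules)

dt = min_switch_time(1 * EV)   # a 1 eV band, typical of chemical bonds
print(f"{dt:.2e} s")           # roughly 3.3e-16 s: sub-femtosecond
```

    So even in principle, a 1 eV switch can't flip much faster than a fraction of a femtosecond.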

    Hope that helps (?). I'm not really qualified to answer the other parts of your question, so I won't. :-)

  • by Anonymous Coward
    IIRC, spin is not a force, but a state. We call it spin because particles with spin behave according to the rules of (quantized) angular momentum. Classical systems that possess angular momentum happen to turn about an axis, making spin a useful analogy.

    The problem comes when people push the analogy too far and assume that spin means everything at the quantum level that it does at the classical level. For some reason I don't see anyone having the same conceptual difficulty when discussing other states such as charm or strangeness, but maybe that's due to the lack of a classical analog.




  • by Anonymous Shepherd ( 17338 ) on Wednesday November 03, 1999 @12:46PM (#1565859) Homepage
    You've assumed that molecules are the lowest level at which we can compute;

    We can go smaller, into atoms and energy and spin states of electrons in a shell, for example, both of which are different things entirely. So we haven't quite hit the limits of information and computing yet.

    So let's say we use a stable lithium atom as a storage 'bit' where we can flip the electron's spin to indicate 1 or 0. We ignore the two inner s-orbital electrons and concentrate on the single valence electron. You'd probably flip it with a single photon of light. How? Beats me. Anyway, you can actually calculate the energy of the photon required to do so, and the time it takes to flip as well (~instantaneous?), and that is some sort of limit, but there are still levels beyond that with which we could play information games, I'm sure.

    You could go into multi-bit storage by including energy level as well as spin; bump an electron up 1, 2, 3, or 4 levels and flip its spin in either direction, and we get 3 bits of storage out of a single atom. If you play with two electrons in such a system, you could conceivably get a 5-bit system, or something like that. With a complex enough atom, you could probably get 6 or 7 bits of data off a single atom!
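    A quick sanity check on the counting above -- this is just my own arithmetic, assuming every (level, spin) combination is a distinguishable, independently settable state:

```python
import math

def bits_per_atom(energy_levels, spin_states=2):
    """Bits encodable if every (level, spin) combination is a
    distinguishable, independently settable state."""
    return math.log2(energy_levels * spin_states)

print(bits_per_atom(4))   # 4 levels x 2 spins = 8 states = 3.0 bits
```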


    -AS
  • I may be mistaken, but I thought the iBook used a Cu process G3, and the G4 was also a copper process?

    Both are already in the market, with more on the way with future G3 PowerBooks, and quite possibly even SOI and Cu based G3s and G4s.

    Computers are not based on diamond because, over the past 19 years, it hasn't offered any performance improvement over silicon. Diamond is definitely part of the research on optical computing, but silicon, in theory, still has another 8 to 10 years of life, at which point an alternative technology may take over. Like optics.


    -AS
  • I am actually somewhat of an expert in this subject, as I have been doing research on molecular electronics for about two years. I have been to most of the conferences so far on the subject. (All this research is being funded by DARPA.) Anyway, the significance of this research is that it involves passing electrical current through molecules, not just a two-state system created by structural conformations. Although I do not have the specific details on this recent experiment, I know that the past work of Reed has involved synthesizing a molecular structure and then testing its electrical properties (I-V, C, etc.) by using a scanning tunneling microscope (STM) tip to apply varying voltages across the molecule and then measuring the results.

    My guess is that they have fabricated a molecule with a high capacitance that can store charge in a similar fashion as conventional DRAM. However, such a molecule cannot be used in a memory array until a switch (molecular-sized transistor) can be fabricated. That should come soon. However, the above is only speculation on my part.

    If you want some good introductory information on molecular electronics in general, including memory, switches, and higher-level logic architectures (AND, OR, XOR, etc.) in molecules, download this paper:

    "Architectures for Molecular Electronic Computers" [mitre.org] by James Ellenbogen and J. Christopher Love

    The research in that paper was performed at the MITRE Corporation, which is also in the process of developing molecular electronic architectures. I contributed a large portion of the computational data to the above paper.
  • Just a small point, but "Self-Assembly" is a massively overused buzzword.

    Self assembly of "macromolecules" has existed for over half a century. (Examples: nylon, teflon, etc)

    Also, I haven't seen anything which mentions the SIZE of any of these molecules. Are we talking about molecules with 20-80 atoms, or are we talking about protein-sized monsters? Storing one bit per protein would be an enormous waste of space. The value of this technology really depends on how big the molecules are.

  • Ummmm... L2 cache RAM is more expensive because it costs more to make, not because they gave it some spiffed-up name. The whole idea of having layers of cache before main memory is accessed is to reduce system cost, because fast (low-latency) memory is very expensive. I believe the main reason is that in order to maintain fast access times with low latency, many more transistors must be used for each bit. More transistors == higher price.
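    For the record, here's the rough arithmetic, assuming the conventional cell designs (my own illustration, not anything specific to the article):

```python
BITS = 8 * 2**20   # one mebibyte of storage

# Conventional cell designs: a 6-transistor SRAM cell (used for cache)
# versus a 1-transistor/1-capacitor DRAM cell (used for main memory).
sram_transistors = 6 * BITS
dram_transistors = 1 * BITS

print(sram_transistors // dram_transistors)   # SRAM needs 6x the transistors
```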
  • As someone has already pointed out, they're still looking for a molecule that will act as a switch, never mind implementing it in any practical way.

    Even if we end up with individual memory cells 1 molecule big, we still need to design circuits on about the same scale, not much point having some ultra dense memory array when you don't have an efficient way of connecting to it...

    I'm guessing it'll be using some really low voltages, so shielding out interference would be tricky.
  • Thank you for your informative post. But...

    Could you possibly give us some answers to my original questions (speed, availability, etc.)? And perhaps tell us your opinion of the viability of this technology? Not just "Can it work?" but "Can it compete against holographic storage and other new technologies coming down the pike?"

    Anyone else remember Magnetic Bubble Storage?

    Jack

  • Read "Fuzzy Logic" by Daniel McNeill and Paul Freiberger. As well as a good description of fuzzy logic itself, they mention some of the history, including info on fuzzy logic hardware.

    The book says that, at the 1987 2nd annual International Fuzzy Systems Association conference, "Dr. Hirota of Hosei University displayed a fuzzy robot arm that played two-dimensional ping-pong in real time".

    Also at that conference, Takeshi Yamakawa demonstrated a fuzzy system that balanced an inverted pendulum. He got it to balance a flower pot on top of the pole, too. Keep in mind that this was 1987, when digital computers were not fast enough to balance that well!

    even more amazing,

    As he was exhibiting the inverted pendulum, one spectator asked him to remove a board from the fuzzy computer. "I thought that was incredible, crazy," Yamakawa says. If he detached a board, the pole would drop at once. But the man persisted, saying he wished to see exactly how it would fall. So Yamakawa disconnected a board and, to the surprise of everyone, the pole remained upright. The controller continued to regulate it. This serendipitous demo showed the graceful degradation of fuzzy systems, their ability to make decisions even with partial information.

    Now that is cool :) The book also mentions that some Japanese companies are using fuzzy logic in their washing machines. ( while( dirty(water) ) wash(); ) This uses less soap and energy than conventional stuff.
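    For anyone curious what a washing-machine-style fuzzy controller actually looks like, here's a toy sketch of my own (the membership functions and wash times are invented, not from the book):

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership: rises from a, peaks at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def wash_time(turbidity):
    """Map water turbidity (0.0 = clean .. 1.0 = filthy) to wash minutes.
    Each rule fires in proportion to its membership, and the output is
    the membership-weighted average of the rule consequents."""
    clean  = tri(turbidity, -0.5, 0.0, 0.5)
    dirty  = tri(turbidity,  0.0, 0.5, 1.0)
    filthy = tri(turbidity,  0.5, 1.0, 1.5)
    # Invented rule consequents: clean -> 5 min, dirty -> 20 min, filthy -> 45 min.
    return (clean * 5 + dirty * 20 + filthy * 45) / (clean + dirty + filthy)

print(wash_time(0.0), wash_time(1.0))   # 5.0 45.0 at the extremes
```

    Unlike `while( dirty(water) ) wash();`, the output varies smoothly with the input, which is why it saves soap and energy.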

    #define X(x,y) x##y

  • Here is a list [nec.com] I found with references to the fundamental limits on information storage and retrieval. The limits discussed in these references are thermodynamic limits imposed on the system. I'm pretty sure Feynman also discussed this (I found this list by searching on: Feynman computing theoretical maximum).

    These are the currently understood maximums for information storage and retrieval; whether they will ever be attainable is a different question, but this is what your question 1 asked.
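    The best-known of those thermodynamic limits is Landauer's bound: erasing one bit costs at least kT ln 2 of energy. That one is easy to compute (my own sketch, using the standard Boltzmann constant):

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K

def landauer_limit(temp_kelvin):
    """Minimum energy dissipated to erase one bit: k * T * ln(2)."""
    return K_B * temp_kelvin * math.log(2)

print(f"{landauer_limit(300):.2e} J")   # ~2.9e-21 J at room temperature
```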
  • Try to watch the rerun of the South Park that was on tonight (Wed. Nov.4) they really blast the whole Pokemon craze.
  • Diamond is a far superior material for chips, depending on how you look at it. The real problem is that it is too hard to make high-quality diamond. Impossible, really. It would be nice stuff to have around, if it could be made. The thermal conductivity is much higher, the stiffness is higher, the bandgap larger, etc. But no one can make it. The research has been mostly abandoned at this point--much like the high-Tc superconductors...
  • I have often thought the universe is one big memory storage device, and that the moment you know all information about something you will need just as much space as the universe to store it.

    Okay, so who's got a good compression system for reality itself?

    Run-length encoding looks pretty good for the vast emptiness of space, to me. :-)
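    In toy form (my own sketch, obviously not a serious cosmology proposal), run-length encoding of mostly-empty space looks like this:

```python
from itertools import groupby

def rle_encode(data):
    """Collapse each run of identical symbols to a (symbol, count) pair."""
    return [(sym, len(list(run))) for sym, run in groupby(data)]

def rle_decode(pairs):
    """Expand (symbol, count) pairs back into the original string."""
    return ''.join(sym * count for sym, count in pairs)

universe = '.' * 10_000 + '*' + '.' * 10_000   # mostly vacuum, one star
packed = rle_encode(universe)
print(packed)   # [('.', 10000), ('*', 1), ('.', 10000)]
assert rle_decode(packed) == universe
```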

    Or alternatively, are weird quantum effects the result of reality itself being stored using lossy compression?


    --
    This comment was brought to you by And Clover.
  • Bigger. A lot bigger. There are quite a lot of molecules that you can see without any visual aid, and quite a few molecules that I wouldn't want to fall on my head. Remember Bakelite, the stuff old telephones used to be made of? Well, the casing is one big molecule. A car tire? One big molecule. "Molecular" says nothing about size. That's polymers for you...

    //rdj
  • We actually know a lot more about this technology already than you might think. The basis of this technology is that these individual molecules are electrically conductive. They are designed to work just as traditional semiconductor devices do, except that they aren't semiconductors, they are molecules. We have to do some slightly different trickery to get the current-voltage behavior that we want, but the same principles of electrical engineering still apply.

    If you are curious about the "bigger picture" of molecular electronics, this paper should answer your questions:

    "Architectures for Molecular Electronic Computers" [mitre.org] by James Ellenbogen and J. Christopher Love

    I actually contributed a large portion of the computational results to the above paper. See my post in the next thread for some more information.
  • Yup - a molecule does not a technology make - any more than 'silicon' without the manufacturing processes to etch it, dope it, and drop metal and insulators on it is just refined sand.

    What is needed is an infrastructure that goes on top of this .... how do you put the molecules into the states you want, how do you sense their state, how do you get that information to the outside world.

    And once you get it out what do you do with it? put it into a molecular computer? .... then probably you're using the same technologies you fabbed the ram with .... put it into a 'traditional' silicon computer? then you need a whole other bunch of technologies/infrastructure that allows these molecular structures to be fabbed alongside silicon ones ....

    In other words there's a lot of work to be done!

    It'll happen one day .... probably not next week, or next year

    Oh yeah and noise/cosmic rays/quantum effects etc etc you have to be able to handle all those other things that can cause these molecules to change state when you don't want them to - with ram you can do heavy ECC and scrubbing to get reliability .... for random logic in, for example, a CPU it's a whole different, and much harder, problem

  • by Jack William Bell ( 84469 ) on Wednesday November 03, 1999 @10:30AM (#1565876) Homepage Journal

    I read the article and came up with more questions than answers... How does it work? What are the 'off' and 'on' states? How do you read/write it? How fast can you cycle it?

    I followed the link from the article to 'Mark A. Reed', one of the scientists mentioned. A quote from his personal site [yale.edu] (deep breath): "My areas of research are quantum electron device physics; tunneling and transport phenomena in semiconductor heterojunction and nanostructured systems; reduced dimensionality effects in nanostructures; resonant tunneling transistors, circuits, and novel heterojunction devices; investigations into the physics and technology of quantum-confined electronic devices; investigation of resonant tunneling physics in a variety of heterojunction systems and materials, including 0D quantum dots and resonant tunneling transistors; and molecular electronics, nanotechnology."

    ...wheeze...

    OK, now I know as much as I did before, and am buzz-worded to death besides. So I drilled deeper into the site and found some pictures of his current work [yale.edu] that do give some clues. Most interesting is the illustration titled "Molecules in nanopores."

    And, of course, there is his List of Publications [yale.edu] which I probably wouldn't understand anyway. Even if they were online... Perhaps someone more competent can read these, and peruse the 'Break Junction Lab' [yale.edu] description for us.

    My take at this point is: the guy probably knows what he is talking about, but I still don't have enough information to determine if the end result would work well enough to actually be useful in '3 to 5 years'. The thing is, there are plenty of technologies that work. But only a few of them have survived the true test of fitness in the marketplace.

    Jack

  • by gad_zuki! ( 70830 ) on Wednesday November 03, 1999 @10:31AM (#1565877)
    Is that a SingMolec Module in your pocket containing all Human Knowledge or are you happy to see me?
  • No matter how much space you have, M$ will always find a way to fill it AND make your computer run really slow.
    Maybe it would be a better idea to
    a) all switch to Linux
    b) teach Microsoft something about not wasting memory

    ---
  • It will certainly need more than one molecule to store one bit, since flipping the bit with radiation (cosmic or otherwise) would probably be pretty easy with single-molecule memory. It's a real problem with memory today. So they would probably need triple redundancy - three molecules per bit.
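    Triple redundancy with a majority vote is simple enough to sketch (my own illustration of the idea, not anything from the article):

```python
def read_bit(copies):
    """Majority vote across redundant physical copies of one stored bit."""
    return int(sum(copies) > len(copies) // 2)

# One of three copies flipped by a stray cosmic ray; the vote recovers it.
print(read_bit([1, 0, 1]))   # 1
print(read_bit([0, 1, 0]))   # 0
```

    Real memories use proper ECC codes rather than brute triplication, since those correct errors with far less overhead.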

    Then there's the problem of wiring up this memory and addressing it, and there's no discussion of performance versus current memories.

    Also, any speculation about having working implementations in the next few years needs a reality check against the time it actually takes to mature a new technology.
  • by blazer1024 ( 72405 ) on Wednesday November 03, 1999 @10:37AM (#1565893)
    I know it may not happen for a long time, but imagine this:

    You walk into your local PC parts store.

    "I want 96 petabytes of memory, please."

    "That's it? That will be $6.28."
  • by DerMarlboro ( 64469 ) on Wednesday November 03, 1999 @10:41AM (#1565895)
    It seems that as we start thinking of molecules as bits, we're getting down near the theoretical limits of information and computing. So that brings me to a few questions that I hope some knowledgable person (someone with a background in physics and computation) might be able to answer:

    1) How fast can a computation happen (in a physical system), in theory?

    2) How fast could molecular gates and molecular bits effect a computation?

    3) How do you estimate these numbers would translate into Teraflops?
  • by Enoch Root ( 57473 ) on Wednesday November 03, 1999 @10:42AM (#1565896)
    If I am not mistaken, this is no different from how quantum logic computers would work. Two years ago, a computer/physics specialist gave a lecture at our physics department on how to store memory using only an atom or molecule per bit.

    Quite simply, you use the spin of a single electron to determine whether you have a 0 ('down' spin) or 1 ('up' spin) for that given atom. These are read with lasers, and I believe this can be done rather quickly.

    If this is the same thing, then the theory has existed for maybe three years, but they seem to have found a practical application for it. Before that, all they could do was use some sort of awkward prototype filled with lenses for interferometry.

    If this is indeed the same thing, it also leads to a spiffy thing: fuzzy logic. Since quantum mechanics is essentially a matter of statistics, it means an electron may be in a probabilistic state between 0 and 1. For instance, it could be:

    1/Sqrt(2)|+> + 1/Sqrt(2)|->

    How this can lead to more efficient calculations, I have no clue. Still, it's cool to think of a single bit as "maybe 0 but most probably 1".
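    For the state above, the probabilities come from squaring the amplitudes (the Born rule) -- a two-line sanity check of my own:

```python
# Born rule: the probability of each measurement outcome is the
# squared magnitude of its amplitude.
amp_up = amp_down = 1 / 2 ** 0.5   # the equal-superposition state above

p_up, p_down = abs(amp_up) ** 2, abs(amp_down) ** 2
print(round(p_up, 3), round(p_down, 3))   # 0.5 0.5
```

    So this particular state is an even coin flip between 0 and 1, and other amplitude choices give the "maybe 0 but most probably 1" case.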

    Again, not sure if this is the same technology. It may just not be; but regardless, the idea remains a really cool one.

    "Knowledge = Power = Energy = Mass"
