IBM Claims World's Smallest SRAM Memory Cell 206
nokiator writes "IBM issued a press release today claiming that it has built an SRAM memory cell that is ten times smaller than those currently available.
My interpretation of the PR-speak in this release is that IBM will be able to build 256Mb or 512Mb SRAM chips, or integrate 32MB or more of SRAM into processor dies for cache applications in the future. Of course, showing some SRAM cell prototypes is a long way from being able to manufacture this technology in a cost-effective way.
There is no information in this PR about the speed or power consumption of SRAM blocks that can be built with this new cell technology. This is not likely to be a potential DRAM replacement for mainstream applications, as DRAM already offers more than ten times the density of SRAM at much better cost."
These memory cells are so small... (Score:5, Funny)
Re:These memory cells are so small... (Score:4, Insightful)
Re:These memory cells are so small... (Score:2)
Comment removed (Score:5, Funny)
Re:These memory cells are so small... (Score:1)
But because the software standard is that old (and the POSIX people fucked this up), you actually have to write to
LLAP & LG
Rene
I'll do better than that... (Score:3, Funny)
If you want to read them back, well that's gonna cost you.
myke
Re:These memory cells are so small... (Score:4, Insightful)
Re: (Score:2)
Re:These memory cells are so small... (Score:2, Funny)
that's not PC!!! (Score:2)
Re:These memory cells are so small... (Score:2)
Re:These memory cells are so small... (Score:2)
The traditional way around this problem is just to represent 1's as a pair of zeros.
It is? How would you distinguish a one from two zeros?
Re:These memory cells are so small... (Score:2)
And how would you differentiate (Score:2)
And how would you differentiate between two zeros and a one? If all you can store are zeros, then you are limited to a unary system which requires exponentially more room than a binary system.
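To make the difference concrete, here is a tiny sketch (my own illustration, not from the parent post): writing a number N in unary takes N marks, while binary needs only about log2(N) bits.

    import math

    # Symbols needed to store the value N in unary versus binary.
    for n in (10, 1000, 1_000_000):
        unary_symbols = n                                   # one mark per unit
        binary_bits = max(1, math.ceil(math.log2(n + 1)))   # bits for values 0..n
        print(f"N={n}: unary needs {unary_symbols} symbols, binary needs {binary_bits} bits")

So a cell that could only ever hold one symbol value would blow up storage requirements very quickly.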
Re:These memory cells are so small... (Score:2)
Re:These memory cells are so small... (Score:2)
They can only store 1s (Score:2)
IBM Rocks (Score:4, Insightful)
Re:IBM Rocks (Score:1, Funny)
Re:IBM Rocks (Score:1)
Re:IBM Rocks (Score:2)
Re:IBM Rocks (Score:5, Informative)
Re:IBM Rocks (Score:2)
Re:IBM Rocks (Score:5, Informative)
Calm down, take a few valium... (Score:3, Funny)
Re:Calm down, take a few valium... (Score:3, Informative)
Re:Calm down, take a few valium... (Score:2)
Were those tears of pain, relief, joy or just plain old exhaustion?
Re:Calm down, take a few valium... (Score:2)
And your parent poster was right; IBM actually develops new technology. Although many of their products are built from the same parts as anyone else's, they use that revenue to do things like invent new microprocessors, memory, hard drive technologies, you name it. IBM is a cool company, even though they're really big. I feel as though IBM has learned its les
Re:IBM Rocks (Score:3, Insightful)
Re:IBM Rocks (Score:2)
IBM never gets out of the lab (Score:2)
E-beam, this does not sound like mass production (Score:2)
Re:E-beam, this does not sound like mass production (Score:2)
I had better elaborate (Score:3, Informative)
The problem with E-beam is that it is a serial process. In effect you have to draw every single transistor using a magnetic field to move the electron beam.
What makes IC manufacturing in general manufacturable is the fact that you are using photo lithography with an optical mask through which you expose the wafer. This is a parallel process whereas E-beam, which can be good for engineering samples, sucks for manufacturability.
Chips = what? (Score:2)
One thing I've never really known is what this type of figure translates to in terms of the real amount of RAM on a memory module that I would stick in my computer.
It's always hard to tell, because there is such a long gap between companies saying they have made X Mb fit on a chip and the time they actually make a 512MB DIMM.
Re:Chips = what? (Score:2)
Memory manufacturers, in order to have bigger numbers, specify capacity in megabits, not megabytes. So, take the number of megabits, and divide by 8. That gives you megabytes. Now, multiply by the number of chips you want to stick on the DIMM (I've seen as low as 2, and as high as 32), and you have the total capacity.
steve
Re:Chips = what? (Score:2)
Memory manufacturers use megabits because they have done so for a long, long time. At one time, memory parts with single-bit data busses were common. Even now, some devices have data widths other than eight, and not every CPU addresses eight-bit bytes.
Re:Chips = what? (Score:5, Informative)
So, SRAM density has nothing to do with DIMMs you put in a computer. It's used for on-chip caches (and off-chip caches), but is too expensive for main memory. Denser SRAM means that Opteron you've got with 1M L2 cache could have 4M or 8M if IBM can mass-produce the stuff.
DIMMs usually have 16 chips (18 for ECC modules). So, if you have 512Mbit DRAMs, you put 16 of them on a module and you get 8Gbit = 1Gbyte. Gigabit DRAMs can make 2GB DIMMs. 2Gbit DRAMs are needed to make 4GB DIMMs; they cost hundreds of dollars each (and you need 16!), which is why 4GB DIMMs are so amazingly expensive.
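As a minimal sketch of that arithmetic (assuming the 16-chip module and the part sizes mentioned above):

    # Module capacity in megabytes from per-chip capacity in megabits.
    def module_capacity_mb(chip_megabits, chips_per_module=16):
        megabytes_per_chip = chip_megabits / 8      # 8 bits per byte
        return megabytes_per_chip * chips_per_module

    print(module_capacity_mb(512))     # 1024 MB -> a 1 GB DIMM
    print(module_capacity_mb(1024))    # 2048 MB -> a 2 GB DIMM
    print(module_capacity_mb(2048))    # 4096 MB -> a 4 GB DIMM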
Re:Chips = what? (Score:2)
For instance, I've seen some articles about RAM talk about companies fitting 256Mbits on a chip, but at the time of the article 1GB DIMMs were commonly available, so the article wasn't really a big deal. This led me to think tha
Re: (Score:2)
Re:Chips = what? (Score:2)
I understand the difference between a RAM and the chips on it. But what I don't understand is if there is somehow a difference between the chips that they so often talk about in articles like this one and the chips on a ram stick. Because from the numbers that are stated in various articles, I would think that there is a difference.
Besides, 256Mb x 4 != 1GB, unless you meant to say 256MB.
Re: (Score:2)
Re:Chips = what? (Score:3, Insightful)
If you're working on a 2 Meg file and you only have a 256K cache, then your CPU has to work on a little bit of it, then swap the cache to main memory to work on some more, then again, then again. Each of those swaps takes time, and main memory is way slower than cache. So if you can keep most of your work in cache and save the trips to memory, you get much better speed.
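A back-of-the-envelope way to see why this matters (the latencies and hit rates below are made up for illustration, not taken from the thread): average access time is the hit rate times the cache latency plus the miss rate times the main-memory latency.

    # Rough average memory access time for a given cache hit rate.
    CACHE_NS = 2.0     # hypothetical cache latency
    DRAM_NS = 60.0     # hypothetical main-memory latency

    def avg_access_ns(hit_rate):
        return hit_rate * CACHE_NS + (1.0 - hit_rate) * DRAM_NS

    for hit_rate in (0.80, 0.95, 0.99):
        print(f"hit rate {hit_rate:.0%}: about {avg_access_ns(hit_rate):.1f} ns per access")

Every few percent of extra hit rate from a bigger cache shaves a surprising amount off the average.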
So a webserver with a 2 GHz proc
Re:Chips = what? (Score:2)
2) I thought that SRAM had to be "refreshed" on almost every clock cycle, and that it stores information by refreshing itself with its current state.
Re:Chips = what? (Score:3, Informative)
It's like a flipflop but wired differently. It's wicked fast because you don't have to refres
Re:Chips = what? (Score:1)
Re:Chips = what? (Score:2)
Consider: a 32-bit processor has a word-size of 32 bits, and a 64-bit processor has a word size of 64 bits. Typically this refers to the size of the int datatype on the system. But in both the x86 and x86-64 archs, for example, a byte is still 8 bits. So unless you're talking about some old-ass architecture from back in the day, a byte is 8 bits.
Re:Chips = what? (Score:2)
Memory chips are specified in terms of megabits. Your typical DIMM has 4, 8 or 16 chips... any more requires fun stacking tricks or a double-height PCB.
The 512 Megabit chip noted here would be 64 Megabytes per chip (64 * 8 = 512). Thus, DIMMs would have the following:
4 chips = 256MB
8 chips = 512MB
16 chips = 1024MB
Now, as it has been noted, SRAM and DRAM are not compatible, so you'll never see this on a modu
Re:Chips = what? (Score:2)
Re:Chips = what? (Score:2)
Density vs Speed (Score:3, Interesting)
I thought the point of the PR was "Although not as dense, SRAM is many times faster than dynamic random access memory (DRAM)." Density is more of an also-ran.
Re:Density vs Speed (Score:2, Informative)
Re:Density vs Speed (Score:2)
In the PC world, you are correct. SRAM isn't used for main memory. In a lot of embedded applications, though, using SRAM for main memory is pretty common.
Re:Density vs Speed vs Power (Score:2, Insightful)
I remember back in the days of the release of the original Macintosh II, a lot of articles (in Byte, MacUser, etc.) about the new 68020 architecture stated that main memory in the Mac would eventually be transitioned to SRAM because of SRAM's speed and power-consumption advantages. Cheap and dense SRAM was coming "real soon," so that extra wait states or caches wo
50.000 at the end of a human hair (Score:2)
Re:50.000 at the end of a human hair (Score:2, Funny)
Re:50.000 at the end of a human hair (Score:2)
I like your unit ;-)
I calculated somewhere else that one naeoahh equals 50.000 times 280x280nm, or six times 114x114nm. (calculating six transistors per SRAM cell; if they use four they are cheats and robbers)
Re:50.000 at the end of a human hair (Score:1)
Re:50.000 at the end of a human hair (Score:2)
150,000 on others... doh!!
Re:50.000 at the end of a human hair (Score:3, Funny)
Not only that, but they don't even mention how many Libraries of Congress those cells store! How am I supposed to compare press releases from different companies if they don't use the complete set of PR specs? Sheesh.
Re:50.000 at the end of a human hair (Score:2)
Look in the spec sheet, POEPMB (pounds of elephants per megabyte), it's there.
Re:50.000 at the end of a human hair (Score:2)
BTM
Re:50.000 at the end of a human hair (Score:2, Informative)
They say 50.000 at the end of a human hair. Does anybody know the actual size of this cell?
First match on Google for diameter of human hair is:
http://hypertextbook.com/facts/1999/BrianLey.shtml [hypertextbook.com]
Quote: "In my research, I have found the diameter of human hair to range from 17 to 181 [micrometer/microns]."
Assume a circular hair of 100 microns diameter, and assume the end of it is a flat circle of area Pi*Rad^2, or 7854 micron^2, divide this by 50000 and you get 0.157 micron^2 per SRAM cell.
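The same arithmetic as a short sketch (keeping the assumed 100-micron hair and the 50,000-cell figure):

    import math

    hair_diameter_um = 100.0     # assumed diameter, middle of the quoted range
    cells_on_the_end = 50_000    # figure from the press release

    end_area_um2 = math.pi * (hair_diameter_um / 2) ** 2    # about 7854 square microns
    area_per_cell_um2 = end_area_um2 / cells_on_the_end     # about 0.157 square microns
    print(f"{area_per_cell_um2:.3f} square microns per SRAM cell")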
The artic
Size Barrier (Score:1)
Re:Size Barrier (Score:1)
Cost effective? That depends... (Score:5, Insightful)
Well, we're still talking about IBM here, right? Do you really think that a few hundred dollars more would even get noticed by clients that buy servers in the 100K range?
The main advantage of buying high-end gear from IBM, Cisco and the like isn't that you get cheap hardware ('cause you simply don't). You buy the gear from that company because you get 10 years of on-site service, including remote failure detection if you pay for it. That means THEY call YOU before you even notice that one of your triple-redundant drives has problems. At that point, the technician is probably already on the way up to your office.
Sure, it's very expensive. But you save quite a lot by not having any significant downtime...
Seen in that context, 500 bucks more for RAM is IMHO just too irrelevant to even think about...
Re:Cost effective? That depends... (Score:1)
Re:Cost effective? That depends... (Score:2)
Generally, if you open a business and buy branded servers, you're stuck with that brand for a long time. Naturally, many companies look for manufacturers that also offer the real big-league stuff, so if they ever need big iron they don't have to switch manufacturers.
AND, to sell a really big supercomputer server farm (Top1000), which makes real, hard millions in profit, you have to have the big iron in stock.
Re:Cost effective? That depends... (Score:2)
The only problem is that you need BILLIONS, not millions to develop the next memory generation....
Re:Cost effective? That depends... (Score:2)
If they manage to license this out, it can pay back that research quite handsomely, and we benefit when it is incorporated into the Pentium, Athlon and PPC chips if it increases cache capacity, maybe bandwidth & latency too, while still incr
SRAM has plusses and minuses. (Score:5, Funny)
However, I wonder if the additional implementation requirements justify the benefits. Static typing is only found in certain computer languages, and programmers have come to rely on dynamic memory allocation offered by malloc() or similar routines. I suspect with careful design one could fully exploit the advantages present -- with software being cheaper than hardware, it could easily be well worth it in embedded or pre-fabricated devices.
The type of implementor that uses (dynamic) extreme programming methodologies may be left out in the cold, although I would like to suggest that would occur anyway to a person working without a blueprint. Regardless, it will be exciting to see how this develops from the embedded perspective...
Re:SRAM has plusses and minuses. (Score:3, Insightful)
Re:SRAM has plusses and minuses. (Score:1)
Re: (Score:2)
Re:SRAM has plusses and minuses. (Score:2)
And SRAM does NOT have a smaller footprint (higher density) than DRAM. And it's incredibly expensive compared to DRAM. SRAM is just not cost-effective to use for main memory.
Re:SRAM has plusses and minuses. (Score:2)
Re: (Score:2)
parent is a troll (or a lame joke) (Score:2)
Correction: SRAM stands for "Synchronous RAM" (Score:5, Funny)
You convinced at least five posters that you were serious while simultaneously spouting an illogical collection of computer-related jargon and utterly false statements. I expect to see you here again on April 1st.
Re:SRAM has plusses and minuses. (Score:2, Interesting)
http://www.ganssle.com/misc/wom1.jpg
http://ww
This is *not* true! (Score:2, Interesting)
Doesn't seem to matter. (Score:2)
I guess this kind of stuff won't matter to the average user for another 10 years, if ever.
The Real Cost of Dram (Score:4, Interesting)
Excuse me, but isn't the cost of any feature directly related to its size, making the above statement redundant?
I mean, a wafer start is a fixed cost, divided by the number of processors it yields. That makes the area of the processor die directly related to its cost, and the size of any feature determines its share of the overall processor cost.
Or in simple terms: Smaller features should always cost less because you get more of them per fixed cost wafer.
Am I missing something major here?
Re:The Real Cost of Dram (Score:2)
Re:The Real Cost of Dram (Score:2, Insightful)
The only other thing to think about is the yield of the wafer (i.e., how many good dies you get from each wafer).
Some circuits are harder to build or more prone to failures in the manufacturing process - ADCs, DACs and high-speed opamps come to mind. Some of the flaws are inherent in the way the wafer was grown.
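A rough sketch of that cost model (every number here is hypothetical, just to show the shape of the calculation): cost per good die is the wafer cost divided by the number of dies per wafer times the yield, so smaller features mean more dies over which to spread the fixed wafer cost.

    # Toy cost-per-die model; all figures are made up for illustration.
    WAFER_COST = 4000.0        # dollars per wafer start (hypothetical)
    WAFER_AREA_MM2 = 70000.0   # roughly the area of a 300 mm wafer

    def cost_per_good_die(die_area_mm2, yield_fraction):
        dies_per_wafer = WAFER_AREA_MM2 / die_area_mm2   # ignores edge losses
        good_dies = dies_per_wafer * yield_fraction
        return WAFER_COST / good_dies

    print(f"${cost_per_good_die(200, 0.7):.2f} per good die")   # larger die, lower yield
    print(f"${cost_per_good_die(100, 0.8):.2f} per good die")   # half the area, better yield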
SRAM can serve as a DRAM replacement (Score:2, Informative)
1) No fancy control logic. DRAM needs to be refreshed on a regular basis; SRAM is a straight "chip select, read/write" type of RAM.
2) Low power. Because it is not being constantly refreshed, it can hold those bits with far less power. That's why you see NVSRAM and don't see NVDRAM. Imagine having 1 gig of RAM that is battery backed up.
3) One can argue that without the control logic, it wi
Re:SRAM can serve as a DRAM replacement (Score:2)
Still, I HOPE SRAM becomes as practical as DRAM for everyday computing some day.
Re:SRAM can serve as a DRAM replacement (Score:2)
Re:SRAM can serve as a DRAM replacement (Score:2)
So point 2 is wrong, but points 1 and 3 are right. (actually, not just theoretically faster, _DEFINITELY_ faster)
Re:SRAM can serve as a DRAM replacement (Score:2)
The slowest part of RAM is the row (column?) select. (I forgot which; one is around an order of magnitude slower, I believe.) I don't know much about the bleeding
Re:SRAM can serve as a DRAM replacement (Score:2)
Re:SRAM can serve as a DRAM replacement (Score:2)
They're looking into it for "instant-on" technology (say, store a few megs of bootup code you'
Re:SRAM can serve as a DRAM replacement (Score:2)
Re: (Score:2)
Re:512MB cache? (Score:4, Insightful)
a) the 64-bit CPUs that exist today have 32-bit instructions.
2) just because the width of the registers is 64 bits does not mean that all data being processed is now 64 bits wide; 64-bit CPUs are certainly capable of processing 32-bit values.
D) somehow having a value stored as 64 bits doesn't mean you can use it for twice as many things as the same value stored as 32 bits (assuming it fits), or that all of your programs/algorithms can somehow use the value twice as much as before (implying twice the work)
Re:512MB cache? (Score:2)
That is because someone decided to mix up address and data sizes when it comes to naming "n-bit CPUs".
In the good old times, calling a CPU 16-bit meant that its data size was 16 bits. Whether it offered a 16-bit (TMS9900), 20-bit (8086/88), 24-bit (68000), or 32-bit address space was something completely different, which is why people kept talking about m-bit addresses and n-bit data sizes.
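For a concrete feel of the address side of that distinction, a quick sketch using the bus widths mentioned above (n address bits reach 2^n bytes):

    # Addressable memory for the address widths mentioned in the parent post.
    for name, addr_bits in (("TMS9900", 16), ("8086/88", 20), ("68000", 24), ("32-bit", 32)):
        reachable_bytes = 2 ** addr_bits
        print(f"{name}: {addr_bits} address bits reach {reachable_bytes:,} bytes")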
Personally, I blame it on the PC market which never cared much fo
Re:512MB cache? (Score:3, Informative)
The 68000 you mention was considered a 32-bit CPU, or at least a 32/16 (32-bit internally, 16-bit data bus), as was the 68010, and it wasn't until the 68020 that it had a 32-bit data bus as well. There was even a variant, the 68008, that was identical to the 6800
Re:512MB cache? (Score:2)
2)
D)
Urk?
-
Re:512MB cache? (Score:3, Interesting)
Also, you may not necessarily be doing twice the work with a 64-bit CPU - it again depends on your compiler. In theory you *can* do two 32-bit operatio
Re:what? (Score:3, Insightful)
Secondly, why in the *world* would you not want to cache more data if you could? Ideally, all of your main memory would be just a large one-way