## Ternary Computing 375

eviltwinimposter writes:

*"This month's American Scientist has an article about base-3 or ternary number systems, and their possible advantages for computing and other applications. Base-3 hardware could be smaller because of decreased number of components and use ternary logic to return less than, greater than, or equal, rather than just the binary true or false, although as the article says, '...you're not going to find a ternary minitower in stock at CompUSA.' Ternary also comes the closest of any integer base to e, the ideal base in terms of efficiency, and has some interesting properties such as unbounded square-free sequences. Also in other formats."*
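The radix-economy claim in the summary can be checked with a quick sketch: cost a base r as r states per digit times the number of digits needed, and base 3 edges out base 2 (the continuous version r/ln r is minimized at r = e). A rough Python illustration, using the standard idealized cost model rather than any hardware measurement:

```python
def digits_needed(r: int, n: int) -> int:
    """How many base-r digits it takes to represent the value n."""
    d = 0
    while n > 0:
        n //= r
        d += 1
    return d

def radix_cost(r: int, n: int) -> int:
    """Idealized hardware cost: r states per digit, times digit count."""
    return r * digits_needed(r, n - 1)

N = 10**6  # represent a million distinct values
costs = {r: radix_cost(r, N) for r in (2, 3, 4, 10)}
# base 3 is the cheapest integer base: {2: 40, 3: 39, 4: 40, 10: 60}
```

The margin over base 2 is tiny, which is part of why the article stops short of predicting ternary minitowers.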
## The future holds that... (Score:3, Insightful)

Actually not a bad step- I wonder when they look at quantum computers using light

## Re:The future holds that... (Score:2)

## Re:The future holds that... (Score:3, Insightful)

Since you have an infinite number of bases to choose from, and since it was demonstrated that base e is the most efficient for representing numbers, it stands to reason that quantum computers based on light should be designed to use base e. But since that isn't very practical, ternary might be the first logical step.

And how come I got rated offtopic? Quantum computing is the logical next application of ternary computing, since binary is pretty much entrenched in everything from your local computer reseller to every time you toss a dime for 'heads or tails'.

## Re:The future holds that... (Score:2)

*I don't understand much about quantum computers using light*

Nobody knows much about quantum computers using anything...

## Re:The future holds that... (Score:4, Funny)

You're all wrong.

There can BE only ONE!

:)

## Re:The future holds that... (Score:2)

*the choices will be 0, 1, and Maybe*

So does this mean that computers and consumer electronics devices' power switches would stop being labeled with just 0 and 1?

Doesn't it make sense that your Ternary computer from BestBuy would have a three state power switch?

## Re:The future holds that... (Score:2)

*I wonder when they look at quantum computers using light*

Quantum computers using photons are a good idea because photons are very well insulated from noise and decoherence. However, it is a bitch to make them interact with each other for the same reason, so gates like controlled-NOTs will be next to impossible to implement.

There is, however, a non-deterministic QC based on linear optics where multi-qubit gates work 1/16th of the time or something. I don't expect it will ever be useful for doing real computing though.

The paper is here [lanl.gov].

## Does this mean (Score:4, Funny)

## Re:Does this mean (Score:3, Funny)

*They've finally invented my favorite circuit... the Maybe gate*

Good... Hopefully this will let us design computers with much less Bill gates.

## Re:Does this mean (Score:2)

<g>

-l

## Re:Does this mean (Score:2)

## well known (Score:2, Insightful)

The thing is, it's simpler to manufacture binary logic than ternary.

So, no big deal really... the choices were made some time ago.

Next step: quantum computing.

## Not base3 again (Score:4, Insightful)

## Re:Not base3 again (Score:3, Interesting)

## Re:Not base3 again (Score:3, Insightful)

The reason that the computer industry grows exponentially is exactly these kinds of paradigm-changing technologies.Most of these have happened in manufacturing processes, but I think as we exhaust that field we will be pushing the changes higher up the architecture. (x86, your days are numbered!)

That said, base 3 is probably pretty stupid. Asynchronous circuits, however, might really make a difference some day...

## Another reason... (Score:2)

## Ternary has been known to be efficient... (Score:5, Informative)

Number Systems. There is an extended discussion on the balanced ternary system and some other exotic number systems (base 2i etc). There are some merits to the ternary system, but it would be harder to implement with transistors.

## Re:Ternary has been known to be efficient... (Score:5, Informative)

*but it would be harder to implement with transistors.*

Very apt. A binary transistor has two states, idealized "on" and "off". From a more analog view that's low current and high current, appropriately connected with a resistor that results in low and high voltages.

The nice feature is that a high voltage at the input opens the transistor, and a low voltage closes it. So we get a relatively complete system: I can get from hi to lo, and from lo to hi.

Ternary would add a "middle" voltage. But middle on the input creates middle on the output, with no direct way to get either high or low, making basic circuits more complex.

But the real killer with "middle" is manufacturing. Let's say we use 2.8 volts for the high level and 0.2 volts for the low level. Due to manufacturing tolerances, some chips' transistors would be "fully" open at 2.3 volts, others at 2.7 volts. That's easy to compensate for in binary designs, you just use the 2.8 to switch the transistor, but for the middle level? What's required to switch a transistor to middle on one chip is sufficient to open the transistor completely on another chip...

So your manufacturing tolerances become way smaller, and that of course reduces yield which increases cost.

Add to that that chips today work with a variety of "hi" voltages like 5, 3.3, 2.8... Most lower-voltage chips are compatible with higher-voltage ones: they produce voltages that are still over the switching point, and accept higher voltages than they operate on.

With ternary that becomes impossible and chip manufacturers need to progressively lower the voltages for higher speed.

Plus disadvantages in power consumption and and and...

Admittedly the article doesn't seem to suggest that ternary is viable, just that it's pretty. Which may be true for a mathematician. :)

## Don't get hung up on transistors. (Score:5, Interesting)

I just dug out my old physical electronics book (Micro Electronics, by Jacob Millman, first edition), and can't find them in there, so here's a slightly less academic reference. [americanmicrosemi.com]

There might be some problems with trying to get the clock speed high enough to compete with the Intel/AMD marketing, though; it says that they can be triggered into conduction by high dV/dt.

## Re:Ternary has been known to be efficient... (Score:2)

Now, I'm obviously not an EE major (I just happen to know a smidgen from my current job), so I may be WAY off base. But...?

weylin

## Re:Ternary has been known to be efficient... (Score:2)

"neither" is vague. It's close to saying it's either light, dark, or neither. Transistors have two basic modes of operation (that I'm aware of) that fit a garden hose analogy: you either step on the hose (blocking current) or physically remove the foot (to allow current). You have to physically attach the gate to either a positive source or a negative drain, otherwise static charge will keep it in whatever state it was previously. The output of a MOSFET is usually the gate of another MOSFET. Moreover, the gate of a MOSFET is like a capacitor, so the role of the gate-controller logic is to charge or discharge the gate capacitor. Thus having some "dim" state that is neither blindingly bright nor pitch dark only slows the process of opening or closing (charging/discharging) the targeted gate.

You might hear about a third "Z" state (cheater latches), but that's just when you don't connect the source of the FET to power or ground, but instead to the output of some other combinational logic. The idea is that when the FET is pinched off, you've electrically isolated one logic component from another. This is good for multiplexing (like memory arrays), where you'll have the output of multiple logic devices connected to a single input of another logic device, and you "activate" only one output device at a time. (The alternative would be to risk having the output of one device +5V, and the output of another device 0V, thereby short-circuiting).

Bipolar transistors are even worse since they never truly pinch off, and even leak current into the control. But they have more efficient current characteristics (higher current swings) and thus work well to amplify the charging.

These are the basic uses of FETs that I've encountered in my undergraduate EE curriculum. End result: there's no free lunch with trinary logic systems and current transistors.

-Michael

## Re:Ternary has been known to be efficient... (Score:2, Interesting)

Simply put, it's a transistor that has different transport properties for spin-up versus spin-down electrons. The Giant Magneto Resistive (GMR) head in your fancy new 100GB hard disk is a fine example of spin effects being used in everyday life. A similar device could be used for doing base-3 arithmetic.

A while back I did some simulations (admittedly simple and first-order) of separating different spins without using ferromagnetic materials, which are used in GMR devices and are basically nasty things to use in device fabrication that should be avoided if at all possible. I found that you can get pretty good separation from the geometry of the device and an applied external magnetic field. This was all done with device sizes and parameters that were reasonable for real life (a few microns in size, no huge magnetic fields, etc).

Imagine a transistor with a single source and two drains, one for spin up and one for spin down.

Not to say it would be easy, just that it's possible, given a little ingenuity, to make a transistor that has 3 states.

## Re:Ternary has been known to be efficient... (Score:3, Insightful)

Why would you choose such a brain dead scheme? 2.8V as your "middle" choice? A sensible scheme would have been +ve rail, -ve rail, and ground. This builds upon 100 years of analog electronics and op-amps. Locking a voltage to a rail is extremely easy AND fast.

The benefit of a ternary scheme is that you have LESS power consumption to achieve the same work. Your individual flip-flap-flops are more complex than a binary flip-flop, but you need fewer flip-flap-flops. Overall you'll have fewer transistors and subsequently less heat than the equivalent binary circuit.

The fact that fewer transistors are required to achieve the same work (despite the fact that there are more transistors per gate) will INCREASE the yields. This DECREASES costs.

How in hell did your post get modded up?

## Re:Ternary has been known to be efficient... (Score:3, Interesting)

## Trits? (Score:4, Funny)

*Setun operated on numbers composed of 18 ternary digits, or trits*

Awww...they shied away from the obvious choice, tits.

## Re:Trits? (Score:2, Funny)

## Re:Trits? (Score:5, Funny)

*Awww...they shied away from the obvious choice, tits.*

No, I think that was a good decision. When I think of tits, I always imagine them in pairs.

## Re:Trits? (Score:2, Informative)

*Setun operated on numbers composed of 18 ternary digits, or trits*

*Awww...they shied away from the obvious choice, tits.*

Just to be more serious and perfectionistic about it: shouldn't the word digit in this case be a trigit? Since the very word digit is prefaced with di, which means two? I guess I could be wrong about that, but it seems to make sense.

## Re:Trits? (Score:2, Informative)

IIRC, the origin of digit is not from di- meaning two, but from digit meaning finger or toe. This makes some sense if you think about where numbering systems came from. FWIW, one advantage of binary is that it's very easy to count in binary on your fingers; your thumb is the ones bit, index finger twos bit, middle finger fours bit, etc. Not quite as easy to do in ternary.
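The finger-counting trick is just reading off bits; a tiny sketch (the finger labeling is my own, matching the parent's thumb-equals-ones-bit convention):

```python
def fingers(n: int, count: int = 5) -> list[int]:
    """Binary finger counting: finger i is raised iff bit i of n is set,
    thumb = bit 0, so one hand counts 0 through 31."""
    return [(n >> i) & 1 for i in range(count)]

fingers(5)   # [1, 0, 1, 0, 0]: thumb and middle finger up
fingers(31)  # [1, 1, 1, 1, 1]: all five up
```

In ternary you'd need each finger to hold one of three positions, which is exactly the ergonomic problem the parent alludes to.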

## Re:Trits? (Score:2)

Think about it. If they were called digits from di- for two, don't you think that it would be called dinary instead of binary?

Well, it is language, not math, we are really discussing. And language isn't always known for its logic. :-)

The prefix di- does mean two in some instances. Hydrogen dioxide, for one. We don't call it hydrogen bioxide, do we? Anyway, you were right and I was wrong in regard to my initial post, so somebody feel free to take my one karma point away.

## Re:Trits? (Score:2)

## Yes! Tits! (Score:5, Funny)

And, instead of a 'nibble' being four bits, we'd have a 'suckle' equaling three tits, like that babe in the movie Total Recall.

Instead of dealing in megabits or gigabytes, we'd have gigatits, which could be abbreviated as DD, saving vast amounts of bandwidth -- which might as well be called handwidth now -- or terateets [terapatrick.com], abbreviatable as DDD.

With all the sexual content in technical lingo (e.g., male and female plugs, master/slave, unix, etc.) this is only a natural development, and given that half of these machines are used for nothing but downloading pictures of naked breasts anyways...

## Re:Yes! Tits! (Score:2)

## Good Old binary and Floating Point. (Score:5, Interesting)

It must be remembered that, for floating point numbers, base 2 is *the* most efficient representation, as argued in the classic paper "What Every Computer Scientist Should Know About Floating-Point Arithmetic [sun.com]" by David Goldberg. The deep understanding behind IEEE 754 is a masterpiece of numerical engineering that is often overlooked, IMO.

## No, please don't! (Score:2, Funny)

10 seconds

3 9 27 81...ummmm...crap

10 seconds

This'll make all my computer-numbering knowledge obsolete

## Trinary Digits (Score:2, Funny)

http://www.schlockmercenary.com/d/20001226.html [schlockmercenary.com]

## astronomers used it since 80's (Score:5, Interesting)

I myself worked on VLBI in the same lab, but our machines were using 1-bit digitization. (BTW, we used regular video cassettes and a somewhat modified VCR to record 7 GBytes on a single 2-hour tape.)

## SETUN - Russian ternary computer (Score:5, Informative)

## Re:SETUN - Russian ternary computer (Score:2)

## Consider base -2. (Score:3, Interesting)

Think about applying it to D/A and A/D conversion for AC signals. It could simplify a flash converter, and conversion to conventional two's-complement signed integers can be performed by a hard-wired lookup table.

-jcr

## Re:Consider base -2. (Score:2)

*Think about applying it to D/A and A/D conversion for AC signals*

Do Anonymous Cowards provide signal? What's the S/N ratio for ACs, anyways?

## Best Quote (Score:2, Funny)

I can always go for a cheap three-some.

Pat

## what a pain... (Score:2, Interesting)

## It's axiomatic (Score:2)

## Nifty. Looks like... (Score:2)

... I might find that the adaptive image compression scheme I was using years ago might turn out to be useful after all. (Some parts of it would have been tons more practical if you used a ternary coding.) Now if I can find the source code amongst all those 360KB floppies that I've been meaning to burn onto CDs, and convert it from FORTRAN...

## Ternary Logic...... (Score:2)

It seems to me that this is valid logic; now if we can just come up with a transistor that can do this sort of thing.

I am sure we can come up with a multi-transistor system made of 2 transistors, but what would be the economic savings of having a ternary logic system if you double the transistors?

## Re:Ternary Logic......Opps dumb HTML (Score:2)

if Y<X stop

if Y=X switch left

if Y>X switch right
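The pseudocode above is just a three-valued comparison; a minimal sketch in Python (the function name is mine, not the poster's):

```python
def ternary_compare(y, x) -> str:
    """Return one of three outcomes instead of a binary true/false,
    mirroring the stop / switch-left / switch-right rules above."""
    if y < x:
        return "stop"
    elif y == x:
        return "switch left"
    else:
        return "switch right"

ternary_compare(1, 2)  # "stop"
ternary_compare(2, 2)  # "switch left"
ternary_compare(3, 2)  # "switch right"
```

This is exactly the less-than / equal / greater-than trichotomy the article mentions as a natural fit for a single ternary signal.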

## Fascinating, but not practical, here's why: (Score:5, Informative)

First of all, hardware is getting smaller and smaller all the time, so the whole premise behind ternary computing (base 3 would use less hardware) doesn't apply, especially since brand new gates would have to be made in order to distinguish between 3 signal levels rather than 2, and that would be taking a HUGE step backwards.

Secondly, doing things on a chip or two is great, but the main problem in computing is communications. The major part of creating efficient communications protocols is determining the probability of a bit error. Probability is a very complicated science, even using the binary distribution, which is a very simple function (that just happens to escape me at the moment.) Now, add another bit, and you have to use a trinary distribution, which I'm sure exists but isn't very common (and not surprisingly, I can't recall that one either). Long story short, this theoretical math has been made practical in computer communications over a long period of time dating back 50 years, starting all over with 3 bits rather than 2 would be extremely complicated and VERY, VERY expensive.

Finally, figuring out logical schemes for advanced, specialized chips is a daunting task. Engineers have come up with shortcuts over the years (K-maps, state diagrams, special algorithms, etc) but adding in a 3rd state to each input would make things almost impossibly complicated. All computer engineers working at the hardware level would have to be re-educated, starting with the simplest of logical gates.

Overall, in my humble opinion, we'll never see large scale use of ternary computing. There's just too much overhead involved in switching over the way of doing things at such a fundamental level. The way hardware advances each year, things are getting smaller and smaller without switching the number base, so until we reach the limit using binary, we'll probably stick with it.

## Re:Fascinating, but not practical, here's why: (Score:2)

## Re:Fascinating, but not practical, here's why: (Score:2, Interesting)

*The major part of creating efficient communications protocols is determining the probability of a bit error.*

You have made some very good points, and the bit error problem is one of the big ones. When you go to ternary logic levels, you reduce the noise margin, so you have to slow down the clock and/or spread out the logic (more space), which offsets the gains you might get from ternary logic.

I once saw a point-to-point ternary logic data bus design that looked very clever on paper. It allowed simultaneous transfer of data in two directions on the same wires. Both ends of the bus were allowed to drive 0 or 1, and both ends watched the ternary logic level on the bus. If the middle state, "M", was observed, then the other end must be driving the opposite logic level.

This looks like a big win since the same wires can carry twice as much traffic than a normal binary bus, but the reality of noise margin made the bus impractical. The noise from the dynamic voltage swing between 0 and 1 made it difficult to reliably discriminate the smaller 0/M or M/1 voltages at high clock rates. The clock rate had to be slowed to less than half the speed of a binary bus which made the ternary bus lose its apparent advantage.
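The bus scheme described above can be sketched in a few lines: each end drives a bit, the wire shows a middle level "M" exactly when the two ends disagree, and each end recovers the other's bit from its own drive plus the observed level. This is a toy model assuming ideal, noiseless levels, which is precisely the assumption that fails in practice:

```python
def wire_level(a: int, b: int) -> str:
    """Observed ternary level when the two ends drive bits a and b:
    agreement shows the rail ('0' or '1'), disagreement shows middle 'M'."""
    return str(a) if a == b else "M"

def decode(own_bit: int, observed: str) -> int:
    """What one end infers the other end sent."""
    return 1 - own_bit if observed == "M" else int(observed)

# simultaneous transfer in both directions over one shared wire
for a in (0, 1):
    for b in (0, 1):
        level = wire_level(a, b)
        assert decode(a, level) == b and decode(b, level) == a
```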

I won't even get into the headaches that ternary logic design would cause. It makes my binary brain hurt.

## Re:Fascinating, but not practical, here's why: (Score:2, Insightful)

On the contrary--the "theoretical math" was never developed for a specific representation of information, much less binary. In fact, information theory accommodates any representation, all the way back to Shannon [bell-labs.com].

The real difficulty is physical implementation. Coming up with coding schemes is trivial.

## Re:Fascinating, but not practical, here's why: (Score:3, Informative)

*Now, add another bit, and you have to use a trinary distribution, which I'm sure exists but isn't very common (and not surprisingly, I can't recall that one either).*

Well, I don't think that the probability is really much worse. Instead of binomial, we have in general multinomial, and here trinomial: pdf = (n! / (x_i! * x_j! * x_k!)) * (p_i^{x_i} * p_j^{x_j} * p_k^{x_k})

See Berger's Statistical Decision Theory and Bayesian Analysis. Or here [home.cern.ch] or here [uah.edu].
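For what it's worth, the trinomial pmf above is easy to evaluate directly; a small sketch:

```python
from math import factorial

def trinomial_pmf(x_i, x_j, x_k, p_i, p_j, p_k):
    """P(X_i=x_i, X_j=x_j, X_k=x_k) over n = x_i+x_j+x_k trials
    with per-trial outcome probabilities p_i, p_j, p_k (summing to 1)."""
    n = x_i + x_j + x_k
    coef = factorial(n) // (factorial(x_i) * factorial(x_j) * factorial(x_k))
    return coef * p_i**x_i * p_j**x_j * p_k**x_k

# three fair trits, one of each symbol: 3! * (1/3)^3 = 2/9
trinomial_pmf(1, 1, 1, 1/3, 1/3, 1/3)
```

So the error-probability math for three-level symbols is no deeper than the binomial case, just less familiar.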

There are some hardware problems; I posted a possible solution

A more serious problem is mentioned by another poster: floating point [sun.com] is where we really, really care about speed and efficiency, and it seems that binary has that sewn up.

Quite right. This is the only argument against it which doesn't have an answer, I suspect.

## Applications on Clockless Logic (Score:3, Interesting)

You really couldn't be more wrong! Ternary logic is at the basis of some of the hottest research in asynchronous logic design right now.

For instance, if you had a group of transistors that computed a multiplication and stored the output in a register, you might see the value of that register change several times until the computation was complete. Right now, the only way you know a computation is complete is that the logic is designed to complete an action in X cycles; as long as you feed in the data and wait X cycles you will get the proper result. Clock cycles can be wasted here, because a simple multiplication might complete in a single clock while harder multiplications might take the full amount of time the logic area is spec'ed for.

Using async logic, this can be done much more efficiently. The multiplication happens just as soon as input data is given, and the next stage of the logic knows when the operation is complete because its wires have three states: 0, 1, and not-yet-done. As soon as all the wires are 0 or 1, the computation is finished (consequently, this is how input works too). There are no "wasted" clock cycles; stuff moves through the logic as quickly as it is completed.
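The not-yet-done idea can be modeled in a few lines, using None for the third wire state (a software caricature of the hardware, obviously):

```python
# Each output wire is 0, 1, or None ("not-yet-done").
def is_complete(wires) -> bool:
    """An async stage's result is valid once every wire has
    resolved to 0 or 1 - no clock involved."""
    return all(w is not None for w in wires)

mid_flight = [1, None, 0, None]   # computation still settling
finished = [1, 0, 0, 1]           # all wires resolved
is_complete(mid_flight)  # False
is_complete(finished)    # True
```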

Of course, there has been some debate whether three states are needed on each wire, or just an additional acknowledgement wire (say 8 wires + 1 for an 8-bit computation block). But, believe it or not, there are already patents for both methods!

I guess, by having true ternary logic on each wire, you could have logic that grabs a result as soon as X% of the wires report they are done with the computation, to get a "good enough" answer if the logic is iteratively improving a solution.

-AP

## Niche applications (Score:2)

You're right about the communication problem, IMO. That's why if ternary computing catches on, it will be limited to "inside" a single chip for a while. For example, an FPU or a DSP processor could make use of ternary arithmetic internally, converting to and from binary when you need to go off-chip. That *may* have advantages. A general-purpose ternary computer, however, probably won't be useful for a very long time, if at all.

## Didn't your mom ever tell you? (Score:2)

## And there's already a language for it! (Score:3, Funny)

TriINTERCAL [muppetlabs.com]! (the link is about INTERCAL, chapter 6 is about the TriINTERCAL extension)

I can't wait until college courses are taught in this truly wondrous and -- who would have thought -- futuristic language.

## I patent... (Score:2)

x <<< 3 is (usually) x*3

and x >>> 3 is (usually) x/3.

The representation of negative numbers is interesting but there is a 3's complement scheme that works. Eg.
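The "Eg." above is cut off; one well-known scheme for ternary negatives is balanced ternary (digits -1, 0, 1), which may or may not be what the poster meant. A sketch, where negation is just flipping every digit's sign:

```python
def to_balanced(n: int) -> list[int]:
    """Balanced-ternary digits (-1, 0, 1) of n, least significant first."""
    if n == 0:
        return [0]
    digits = []
    while n != 0:
        r = n % 3
        if r == 2:          # digit 2 becomes -1 with a carry into the next place
            r = -1
            n += 1
        digits.append(r)
        n //= 3
    return digits

def from_balanced(digits: list[int]) -> int:
    return sum(d * 3**i for i, d in enumerate(digits))

to_balanced(5)                               # [-1, -1, 1], i.e. 9 - 3 - 1
from_balanced([-d for d in to_balanced(5)])  # -5: negate by flipping signs
```

No sign bit is needed at all, which is one of balanced ternary's often-cited charms.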

## Re:I patent... (Score:2)

*x <<< 3 is (usually) x*3*

Wouldn't that be x*27? x <<< 1 would be x*3.

## How would you get clean state transitions? (Score:2)

I don't see how one can design something as fast as a binary system, and still allow us to go from 0 to 2 without going through 1. If you are doing voltages, the intermediate value theorem forces you to go through state 1. Similarly if you are doing tones.

One can design a logic system, with "forbidden" state transitions. But then you would have to argue that "ternary logic with forbidden transitions" is significantly better than "binary logic." It seems to me that you would lose 90% of your advantage if you forbade 0 - 2 and 2 - 0 transitions.

## not to be an engineer... (Score:4, Informative)

The reason that you can't get, and won't for a long time, anything greater than base 2 is that setting and sensing more than two logical levels in a given voltage range is very hard. Those ones and zeros you like to look at and think about discretely are not really ones and zeros, but voltages close to those that represent one and zero, close enough to not confuse the physics of the device in question.

For example, if you arbitrarily define 0 volts to be a 0 and 1 volt to be 1 in an equally useless and arbitrary circuit, and you monitor the voltage, what do you assume is happening if one of your discrete samples is

Now, I remember something about doubling flash memory densities by sensing 4 voltage ranges in each cell, but I imagine the timing precision required to do that correctly is orders of magnitude easier to achieve (and still a royal pain) than putting ternary logic into a modern microprocessor (with tens of millions of transistors, implementing everything created in the entire history of computing that might be even marginally useful, so that you can get 3 more frames per second in Quake 3).

## Gratuitous rain? (Score:5, Interesting)

I am shocked, shocked to discover that a fundamental computer architecture explored in the 1950's, rejected as unworkable, and forgotten is in fact unworkable.

The feeling that this induces has no word in English, but in Japanese it's called *yappari*.

## Huh? (Score:2)

A/D controllers do this all the time, for larger bases, often on the order of 256 to 24 million. Granted, the digital results don't have logical consequences. But you can't ignore the fact that binary systems have to set tolerance levels just the same as ternary systems.

## Difficulties in Implementation (Score:2, Insightful)

I vaguely remember discussing this in a Computer Science class on circuit design four or five years back. While this might be possible for some sort of non-copper processor, I imagine the difficulty would be in rapidly distinguishing correct voltages for each bit on today's technology.

In simplistic terms: presently, if you have two states, then at a clock cycle the voltage is either 0 (0 volts) or 1 (3.3 volts). Theoretically, you could have an infinite number of states, provided you had infinite voltage precision. Thus, 0=0v, 1=.1v, 2=.2v,

However, your processor is probably designed with a tolerance in mind, thus 3.1 volts is probably a 1, and

I'm sure there's a PhD EE somewhere in this crowd who can explain this even better, but my point is that I don't think anything but binary computers are useful with current electrical technology. Presently, there's a reason we use two states: because it's easy and *fast* to check "on or off" without having to determine how "on" is "on". Now, if one were able to use fiber and send colors along with the pulses, then you might have something...

## Turing Theory... Complexity Analysis .. blah blah. (Score:2, Insightful)

One of the earlier posters mentioned something about it all being two-dimensional... actually, a good way to look at computation is using what Turing devised: a one-dimensional model of computation based upon a single tape.

In studying Turing Machines, the mathematical model based upon (potentially infinitely long) tapes is used extensively. Move the tape right, left, and modify what is under the head, for example, are the primitive operations. A set of functions defines how symbols are changed, and when computation halts, as well as the resulting halt state.

A basic examination of binary versus ternary systems, based on Turing Machines, and some (basic) complexity Theory...

In binary systems, computation trees build at the rate of 2^n, where n is the number of computational steps...

In a trinary system, we are looking at 3^n.

So, performance could be considered in terms of 3^n versus 2^n, which I believe amounts to polynomial (not exponential) differences in processing power.

But any binary system could be used to *simulate* a 3^n system through the use of an (at worst polynomially larger) set of functions and/or chunkings of data (to represent the 3 states in binary, repeatedly). Also, any necessary encodings could be performed by 'chunking' the ternary data into blocks.
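The "chunking" is straightforward: each trit fits in two bits, so simulating ternary data on a binary machine costs only a constant factor (2 / log2 3, about 1.26). A sketch:

```python
import math

def pack_trits(trits) -> int:
    """Store each base-3 digit (0..2) in a 2-bit field."""
    n = 0
    for t in trits:
        n = (n << 2) | t
    return n

def unpack_trits(n: int, count: int) -> list:
    """Recover `count` trits packed by pack_trits."""
    return [(n >> 2 * (count - 1 - i)) & 0b11 for i in range(count)]

packed = pack_trits([2, 1, 0])   # 0b10_01_00 == 36
unpack_trits(packed, 3)          # [2, 1, 0]

# a trit carries log2(3) ~ 1.585 bits but we spend 2: ~26% overhead,
# a constant factor - so binary simulates ternary with no exponential cost
overhead = 2 / math.log2(3)
```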

Polynomial gains are nice, but at best, we don't have an earth-shattering enhancement.

P.S. Some of this may be a bit rusty, so if anyone has a more concrete analysis or corrections, feel free...

Sam Nitzberg

sam@iamsam.com

http://www.iamsam.com

## Been done (Score:2, Informative)

The Russian translation of Knuth's Volume 2 was quite funny. Knuth is saying that "somewhere, some strange person might actually build a ternary computer". The Russian translation has a translator's footnote: "Actually, it has been built in Russia in the 1960s."

See this page for more information about Setun:

http://www.computer-museum.ru/english/setun.htm

## Old idea, but causes problems... (Score:2)

The problem is that although you reduce the number of gates, the gates themselves get horribly complex. There are only 16 possible two-input binary gates, of which two are trivial, two are wires, two are NOTs, two are ANDN, two are ORN, and one each of AND, NAND, OR, NOR, XOR and XNOR. All of these are familiar gates. However, there are no less than 19683 two-input ternary gates. If you sacrifice some of the combinations, you suddenly are doing less than true ternary computation, and you're wasting the power of your machine.
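The counts above check out: a k-input, base-b gate is a truth table with b^k entries, each holding one of b values, so there are b^(b^k) distinct gates. Quick verification:

```python
def num_gates(base: int, inputs: int) -> int:
    """Distinct truth tables for a gate: base ** (base ** inputs) -
    each of the base**inputs input combinations maps to one of base outputs."""
    return base ** (base ** inputs)

num_gates(2, 2)  # 16: the familiar two-input binary gates
num_gates(3, 2)  # 19683: two-input ternary gates, as stated above
num_gates(3, 1)  # 27 one-input ternary gates (versus 4 in binary)
```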

That, in combination with the sheer commonality of true/false type states, means, in my opinion, that binary is here to stay.

## Units? (Score:4, Funny)

Obviously, there's the basic unit of storage (1, 0, -1; on, off, undefined; true, false, maybe; whatever). We called this a trit for obvious reasons of parallel to the binary world.

Ok, good enough so far. Then, there's the basic unit that's used to store characters or very simple numbers. We decided that 9 trits would be good (this was to allow for UNICODE-like representations). This seemed to be a shoe-in for the title, tryte.

Then, you occasionally want to have something that is used in firmware to sub-divide trytes into various fields. In binary we call this a nibble, so in honor of Star Trek we called this one (3 trits) a tribble.

But, there it stopped, as we soon realized what we'd be measuring the system's word-size in.... Man, I thought SCSI was a painful phrase to use all the time ;-)

## Ternary RAM (Score:2)

Most desktop computers use dynamic RAM to achieve high densities at low cost. (It's not unusual for new desktop systems to have 1G of RAM in about 4 modules.) They work on the principle of charging tiny capacitors: charged represents a 1, for instance, and not charged represents a 0. But capacitors can be charged in one of two polarities (one plate negative with respect to the other, or positive). Thus it seems that it would be a small step to go from binary dynamic RAM to ternary. The supporting electronics and refresh circuitry would be a bit more sophisticated, but the capacitor array might be basically the same, resulting in increased capacity (log3/log2) with about the same real estate! So perhaps dynamic RAM is really optimal in ternary as well.

I'm not an electrical engineer; the above is merely speculation off the top of my head. Does anyone more qualified than myself have any thoughts on this?

## Re:Ternary RAM (Score:2)

But, if the computer uses ternary logic, then to keep the one to one correspondence between capacitors and lookups (which I think would be necessary for efficiency) one could use a three state system: not charged, and charged with either of two polarities. This, I think, would be as efficient as far as timing goes, and give you the 50% savings in size.
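The capacity arithmetic, for what it's worth: three charge states per capacitor means log2(3) bits of information per cell instead of 1, a factor of log(3)/log(2) ≈ 1.585 (the "50% savings" above is this same factor, roughly). A one-liner sketch with a made-up cell count:

```python
import math

cells = 2**30                        # a hypothetical 1-gigacapacitor array
binary_bits = cells * 1              # one bit per two-state cell
ternary_bits = cells * math.log2(3)  # ~1.585 bits per three-state cell
gain = ternary_bits / binary_bits    # log(3)/log(2), about 1.585
```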

## Q: Ternary in software? (Score:2)

## Insults (Score:2)

## Intercal (Score:3, Funny)

## Ternary trees (Score:2, Offtopic)

We just did some testing, comparing those search algorithms with each other. Although hashes are more or less comparable in speed with ternary trees, binary trees are much slower.

Some sample output: (btw, we didn't balance the ternary tree, although we did some really basic balancing on the binary tree).

Clearly the ternary tree and hash are much faster than the binary tree. Although there are still some optimisations to make, we believe that the ternary tree will outperform the binary tree at all times.
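For readers unfamiliar with the structure being benchmarked: a ternary search tree node splits three ways on a single character (less / equal / greater). A minimal sketch, not the poster's actual test code (which isn't shown):

```python
class TSTNode:
    """Ternary search tree node: splits on one character
    with lo/eq/hi children."""
    def __init__(self, ch):
        self.ch = ch
        self.lo = self.eq = self.hi = None
        self.end = False  # True if a word ends at this node

def insert(node, word, i=0):
    ch = word[i]
    if node is None:
        node = TSTNode(ch)
    if ch < node.ch:
        node.lo = insert(node.lo, word, i)
    elif ch > node.ch:
        node.hi = insert(node.hi, word, i)
    elif i + 1 < len(word):
        node.eq = insert(node.eq, word, i + 1)
    else:
        node.end = True
    return node

def contains(node, word, i=0):
    if node is None:
        return False
    ch = word[i]
    if ch < node.ch:
        return contains(node.lo, word, i)
    if ch > node.ch:
        return contains(node.hi, word, i)
    if i + 1 < len(word):
        return contains(node.eq, word, i + 1)
    return node.end

root = None
for w in ("cat", "cap", "dog"):
    root = insert(root, w)
assert contains(root, "cap") and not contains(root, "ca")
```

Unlike a binary search tree keyed on whole strings, each comparison here touches a single character, which is where the speed advantage in the benchmark comes from.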

We also made some (very) cool graphs with Graphviz, but unfortunately have no good place to share them with the rest of the world.
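For readers who haven't met the structure being benchmarked: a ternary search tree stores strings character by character, with less-than/equal/greater-than children at each node. A possible Python sketch (names and layout are mine, not the poster's code):

```python
class TSTNode:
    """One node of a ternary search tree: a character plus lo/eq/hi children."""
    __slots__ = ("ch", "lo", "eq", "hi", "value")
    def __init__(self, ch):
        self.ch = ch
        self.lo = self.eq = self.hi = None
        self.value = None  # set when a complete key ends at this node

def tst_insert(node, key, value, i=0):
    ch = key[i]
    if node is None:
        node = TSTNode(ch)
    if ch < node.ch:
        node.lo = tst_insert(node.lo, key, value, i)
    elif ch > node.ch:
        node.hi = tst_insert(node.hi, key, value, i)
    elif i < len(key) - 1:
        node.eq = tst_insert(node.eq, key, value, i + 1)
    else:
        node.value = value
    return node

def tst_search(node, key, i=0):
    while node is not None:
        ch = key[i]
        if ch < node.ch:
            node = node.lo
        elif ch > node.ch:
            node = node.hi
        elif i < len(key) - 1:
            node, i = node.eq, i + 1
        else:
            return node.value
    return None

root = None
for word in ["cat", "cap", "car", "dog", "do"]:
    root = tst_insert(root, word, True)
assert tst_search(root, "cap") and tst_search(root, "do")
assert tst_search(root, "d") is None   # prefix alone is not a stored key
```

Like the poster's version, this one is unbalanced; insertion order determines the shape.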

## Why this is useful (Score:3, Insightful)

The solution? Don't use electric circuits...don't use transistors.

Electric circuits will only get us so far, and then we'll have to move on to more 'exotic' hardware -- optical computing, molecular computing, quantum computing.......

Suppose a qubit's state is described by the spin polarization of an electron pair -- they can either be both up, both down, or one of each -- you can't tell which one, so it's actually 3 states (balanced, at that)...

In optical computing, suppose you can push the frequency of the lasers a little in either direction of 'neutral'...this is also base 3.

So what I'm trying to say is, don't just say "base-3 computing is not practical with current technology" -- because it isn't, but it WILL be practical (perhaps even more so than binary computing) with future technology.

And to finish with something lighter...

```c
troolean x, y, z;

x = true;
y = false;
z = existentialism;
```

:)
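The troolean gag has a serious cousin: Kleene's three-valued logic, where the third value means "unknown". A minimal sketch, with an encoding of my own choosing (false/unknown/true as -1/0/+1, so AND is min and OR is max):

```python
# Kleene three-valued logic: false < unknown < true, encoded as -1 < 0 < +1.
F, U, T = -1, 0, 1

def k_not(a):
    return -a

def k_and(a, b):
    return min(a, b)

def k_or(a, b):
    return max(a, b)

assert k_and(T, U) == U   # true AND unknown: still unknown
assert k_or(T, U) == T    # true OR unknown: definitely true
assert k_or(F, U) == U    # false OR unknown: still unknown
assert k_not(U) == U      # NOT unknown: unknown
```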

## The answer... (Score:2, Funny)

Any system that can't spell "42" is not worth it.

## the Epiphany of the File Cabinet (Score:2)

Then came my Epiphany of the File Cabinet a few weeks ago...

When counting, thou shalt not stop at One. Neither shalt thou go on to Three.

## What about software? (Score:2, Interesting)

By the way, Ada does have support for trit operations (in some bizarre way), but this was merely an accident.

## Never work (Score:2)

## Ternary is cheaper for mathematics, not engineers (Score:2, Insightful)

So a system with 2*3 = 6 transistors can count to 3^2 = 9 in ternary, while in binary only to 2^3 = 8. When searching for the maximum of f(x) = x^(const/x), one ends up with x = e for any const > 1. That's why the article mentions that 3 is the integer closest to e, the ideal number base. I remember having that case in a mathematics competition way back in 8th grade.
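The radix-economy argument can be checked directly. The cost of representing N in base r is proportional to r * log_r(N) = (r / ln r) * ln N, so the base-dependent factor is r/ln r, minimised at r = e; among integers, r = 3 wins. A quick check of that cost model:

```python
import math

def radix_economy(r):
    """Base-dependent cost factor: states per digit (r) times digits needed (1/ln r)."""
    return r / math.log(r)

costs = {r: radix_economy(r) for r in (2, 3, 4, 5, 10)}
best = min(costs, key=costs.get)

assert best == 3                            # ternary is the most economical integer base
assert radix_economy(math.e) < costs[3]     # but e itself is the true optimum
print({r: round(c, 3) for r, c in costs.items()})
```

Note that bases 2 and 4 tie exactly (2/ln 2 = 4/ln 4), with ternary beating both by about 5%.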

In engineering practice, that is quite far from the truth. In ECL logic, a ternary half-adder requires the same number of transistors as a binary one, and it takes no more wires to carry a ternary digit than a binary one. However, we all know why ECL is nearly extinct: its high power consumption prevents high integration.

The benefit of binary logic can be seen in CMOS, where we have two states, each of which consumes essentially no static power while still having low output impedance.

Petrus

## happy juice (Score:2)

## Can it run MIT's new "Cesium OS"? (Score:2)

Imagine what you could do with Cesium OS [slashdot.org] running on a ternary computer. Even better, a distributed system of Beowulf clusters running Cesium-based ternary processes! And perhaps a Natalie Portman wallpaper, while looking at your fake Britney Spears porno at 3am, eating a taco with an old X-Files on television in the background...

## Re:Less than, greater than, or equal? (Score:5, Funny)

Nope: one, zero, and CowboyNeal.

## Re:Faster to just get rid of 0's (Score:2, Funny)

## Re:Nondigital computing: Root Not (Score:4, Informative)

The problem is understanding the new metaphors required to implement new modes of math. Simply adding a third state doesn't get you a revolutionary new mode of computation, it just gets you more bits per wire. For example, look at flash technology: they now store multiple bits per cell by designing sense amps that convert the analog level to a binary pattern.
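The multi-level-cell idea mentioned above can be sketched as thresholding an analog cell voltage into a two-bit pattern. The thresholds below are illustrative placeholders, not real device parameters:

```python
def sense_mlc(voltage, vmax=3.0):
    """Map an analog cell voltage to a 2-bit string via three illustrative thresholds."""
    thresholds = [vmax * k / 4 for k in (1, 2, 3)]   # 0.75 V, 1.5 V, 2.25 V
    level = sum(voltage >= t for t in thresholds)     # count thresholds crossed: 0..3
    return format(level, "02b")

# Four distinguishable charge levels -> two bits per cell.
assert sense_mlc(0.1) == "00"
assert sense_mlc(1.0) == "01"
assert sense_mlc(2.0) == "10"
assert sense_mlc(2.9) == "11"
```

The point stands either way: this is denser storage on the same wire count, not a new mode of computation.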

Read the book "An Introduction to Quantum Computing". I forget the author, but it's the one that comes with the CD of mathematica examples.

In this book they discuss a simple adder that Feynman derived. The realization of the Hamiltonian operator (similar to the transfer function H(s)) requires a gate called:

Square root of NOT!

It's pretty crazy, but when you walk through the example step by step, it becomes clearer why it is needed to build the simple adder.

Now how you actually build a root-not gate is another problem, but I'm just making this point to illustrate how "meta" the new concepts have to be to truly revolutionize computation.

There's simply nothing better than binary right now.

## Re:Nondigital computing: Root Not (Score:2)

Close, but you are still doing digital computing! Just because it's not binary doesn't mean it isn't digital.

Did anyone say it wasn't digital? Or did you confuse "variants of digital computing" with "alternatives to digital computing"?

## Re:Nondigital computing: Root Not (Score:2)

Did anyone say it wasn't digital?

Yep. Look at the thread's title.

-- Kaufmann

## Re:Nondigital computing: Root Not (Score:2)

## Re:Nondigital computing: Root Not (Score:2)

There is nothing strange about sqrt of NOT, except its name.

Yes, but you say that and then you go on to give a really strange explanation!

the interesting point is:

a -- sqrtNot --> b

...is a probability, but...

a --> sqrtNot --> sqrtNot --> b

...is absolutely certain as it reduces to a = ~b. kinda like the three-polaroid filter experiment for light, i guess.
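The square-root-of-NOT claim is easy to verify numerically. One standard choice of matrix is (1/2)[[1+i, 1-i], [1-i, 1+i]]: applied once to a basis state it gives a 50/50 superposition, applied twice it is exactly NOT. A plain-Python check, assuming that conventional matrix:

```python
# One standard square-root-of-NOT matrix: M = 1/2 * [[1+i, 1-i], [1-i, 1+i]].
a, b = (1 + 1j) / 2, (1 - 1j) / 2
M = [[a, b], [b, a]]

def matmul(X, Y):
    """Multiply two 2x2 complex matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Applied twice, M is exactly the NOT (bit-flip) gate.
M2 = matmul(M, M)
NOT = [[0, 1], [1, 0]]
assert all(abs(M2[i][j] - NOT[i][j]) < 1e-12 for i in range(2) for j in range(2))

# Applied once to |0>, the amplitudes give probability 1/2 for each outcome.
assert abs(abs(a) ** 2 - 0.5) < 1e-12 and abs(abs(b) ** 2 - 0.5) < 1e-12
```

This matches the post: one application gives only probabilities, two in a row give a certain bit flip.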

regardless, i think in the early stages of any new science it's ok to use the term 'weird', maybe even 'bizarre', or as some may say, 'hella dope'.

## Re:Nondigital computing (Score:2, Interesting)

Up, down, top,

bottom, strange, charmed

blue, red, green

Yikes!

## Re:Nondigital computing (Score:3, Informative)

particles come in (isospin) pairs. Under SU(2) (weak-force pairs):

electron / neutrino

up / down

strange / charmed

bottom / top

proton / neutron (which is up/down again)

Blue, red, and green form a triplet because color has SU(3) symmetry.

## That's a cut and paste from Usenet -- again! (Score:2)

## Re:Nondigital computing (Score:2)

## Re:base e, base schmeee (Score:2)

;-)

## Re:base e, base schmeee (Score:2)

http://www.urbanlegends.com/legal/pi_indiana.ht

-l

## Re:Efficiency of base? (Score:2)

Just as there may be practical limitations that prevent precisely maximising the volume of a soup can, that doesn't mean getting close won't let you approach the maximum of the function.

A better question would've been why use his measurement of efficiency, a question the article examines.

## Re:Efficiency of base? (Score:2)

Integer steps in the function are the correct choices, just like a soup can probably needs to fit certain machine constraints.

I was taking exception to your claim that choosing the closest integer was meaningless.

In any case, a darn cool article. Well written, clear, and entertaining - so my nitpickiness threshold has climbed significantly.

## Re:Efficiency of base? (Score:2)

## grammar nazi Re:how about quadratic? (Score:2)

<g>

-l

## Re:'e' is the perfect base? (Score:2)

e notation, is less than in any other base.

## Re:Ternary Computing and the rw measure (Score:2)