Optical Cryptography
chill writes: "In Cryptonomicon, Neal Stephenson wrote about Bell Labs' research into using static, or chaotic, signals to mask communications. A message would be generated, then the signal masked in noise. Someone on the other end would subtract out the noise to get the signal. Works great if both ends have the exact same noise. Now, Jia-ming Liu, professor of electrical engineering at UCLA, is giving a presentation on doing essentially the same thing using OC-48 (2.5 Gbps) optical circuits. The presentation will be at the upcoming Optical Fiber Communications Conference and Exhibit. There is an article covering this and some other nice advances in optics over in Wired."
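For readers who want to play with the basic idea, here is a toy digital sketch (nothing to do with Liu's analog OC-48 scheme, and every name and parameter below is made up): both ends derive the same pseudo-random "noise" from a shared seed, the sender adds it to the signal, and the receiver subtracts it back out.

```python
# Toy illustration of "mask the signal in noise, subtract the noise at the
# other end".  The shared seed plays the role of "both ends have the exact
# same noise"; with a pseudo-random generator that is really just a shared key.
import numpy as np

def mask(signal, seed):
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, 10.0, size=signal.shape)  # noise far louder than the signal
    return signal + noise

def unmask(masked, seed):
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, 10.0, size=masked.shape)  # identical noise from the same seed
    return masked - noise

shared_seed = 42
t = np.linspace(0.0, 1.0, 1000)
signal = np.sin(2 * np.pi * 5 * t)                    # the "message"

on_the_wire = mask(signal, shared_seed)
recovered = unmask(on_the_wire, shared_seed)
print(np.allclose(recovered, signal))                 # True: the noise cancels exactly
```

The catch, as several comments below point out, is the "both ends have the exact same noise" part: distributing the noise (or the seed that generates it) is essentially a key-distribution problem.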
steganography? (Score:3, Insightful)
This sounds like steganography, where a message is hidden in noise (the image); when the image (the noise) is subtracted, the message appears.
Are we still trying to re-invent the wheel here, or am I missing something?
Re:Nope: You've just given the bad guy your key. (Score:2, Insightful)
Re:Seems like a waste of noise... (Score:2, Insightful)
This wave would take up more bandwidth than either of the other two.
Re:steganography? (Score:2, Insightful)
If you merely have to superimpose two lightwaves to steganize (sp?) a message, it all goes in realtime no matter how much bandwidth the lightwave carries.
It's not a digital technique. It uses analog lightwaves.
That means the technique can be used in, e.g., optical fibres, so nobody can intercept messages by physically tapping the fibre.
I don't think it's intended for home computers. It sounds more like a simple way for telephone companies to protect all the data in their optical fibres without going in and encrypting the individual IP packets and such.
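For the curious, the usual demonstration of this kind of analog masking uses a chaotic carrier and a synchronizing receiver (the well-known Cuomo-Oppenheim Lorenz-circuit demo). Below is a rough numerical sketch of that idea, not Liu's actual laser setup, and all the values are purely illustrative:

```python
# Chaotic masking sketch: the transmitter buries a small message in a Lorenz
# chaotic carrier; the receiver runs the same equations driven by the received
# signal, synchronizes to the carrier, and subtracts its own regenerated copy.
import numpy as np

sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0      # standard Lorenz parameters
dt, steps = 1e-3, 60_000

x, y, z = 1.0, 1.0, 1.0                        # transmitter state
xr, yr, zr = -5.0, 2.0, 30.0                   # receiver state, deliberately out of sync

message = 0.05 * np.sin(2 * np.pi * 0.5 * np.arange(steps) * dt)  # tiny test tone
recovered = np.empty(steps)

for i in range(steps):
    s = x + message[i]                         # transmitted: chaos + message

    # Transmitter: plain Lorenz system (crude Euler step).
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z

    # Receiver: same system, but its y/z equations are driven by the received
    # signal s instead of its own x (Pecora-Carroll style drive), so it locks
    # onto the transmitter's chaotic carrier.
    dxr = sigma * (yr - xr)
    dyr = s * (rho - zr) - yr
    dzr = s * yr - beta * zr

    x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
    xr, yr, zr = xr + dxr * dt, yr + dyr * dt, zr + dzr * dt

    recovered[i] = s - xr                      # subtract the regenerated carrier

# After the synchronization transient, `recovered` roughly tracks `message`
# (noisily -- the message itself perturbs the drive signal).
err = np.mean(np.abs(recovered[steps // 2:] - message[steps // 2:]))
print(f"mean recovery error after sync: {err:.4f}")
```

The security argument is the same as in the story: an eavesdropper on the fibre sees only the chaotic waveform, and without a matched receiver (the "same noise") can't separate the small message from the carrier.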
Re:Security through obscurity. (Score:2, Insightful)
It's really, really hard to mask a legitimate message in random noise and hope that the bad guy won't be able to differentiate the two.
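To make that concrete, here is a toy example (all numbers invented, nothing to do with the optical scheme): a weak periodic signal hidden well below the noise floor still stands out under a trivial FFT peak test, so "bury it in noise" only helps if the noise is effectively a key the attacker can't model.

```python
# Toy detectability test: compare the spectrum of pure Gaussian noise with the
# spectrum of the same kind of noise plus a faint sine wave.  The sine's
# energy piles up in one frequency bin, so a simple peak-to-median statistic
# exposes it even though the tone is invisible in the time domain.
import numpy as np

rng = np.random.default_rng(1)
n = 1 << 16
t = np.arange(n)

pure_noise = rng.normal(0.0, 1.0, n)
masked = rng.normal(0.0, 1.0, n) + 0.05 * np.sin(2 * np.pi * t / 64)  # tone ~29 dB below the noise power

def peak_to_median(x):
    spec = np.abs(np.fft.rfft(x))[1:]          # drop the DC bin
    return spec.max() / np.median(spec)

print(f"pure noise:     {peak_to_median(pure_noise):.1f}")
print(f"noise + signal: {peak_to_median(masked):.1f}")   # noticeably larger peak
```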
Re:Asymptotic rate is not good enough. (Score:1, Insightful)
Ok...
So you're saying Rissanen gave the theoretical limit for how quickly a compression algorithm asymptotically approaches maximum entropy in its output, and Context Tree Weighting and other algorithms actually reach that limit?
Or is this only proven for certain classes of input, like Markov models?
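For reference, the usual statement as I remember it (so double-check the constants) bounds the redundancy above the entropy, and CTW is known to meet it for a specific model class, which is exactly what the parent is asking about:

```latex
% Rissanen's lower bound, roughly as usually stated: for a smooth family of
% sources with k free parameters, any code's expected length on blocks of
% length n satisfies, for almost every source \theta in the family,
\[
  \mathbb{E}_\theta\bigl[L_n(X^n)\bigr] \;-\; n H_\theta
  \;\ge\; (1-\epsilon)\,\frac{k}{2}\,\log n
  \qquad \text{for every } \epsilon > 0 \text{ and all sufficiently large } n .
\]
% Context Tree Weighting achieves redundancy of order (k/2) log n (plus a
% constant) for tree sources, i.e. finite-memory Markov sources -- so the
% matching upper bound is proved for that class of input, not for arbitrary
% sequences.
```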