
A Mighty Number Falls

Posted by kdawson
from the time-to-generate-new-keys dept.
space_in_your_face writes "An international team has broken a long-standing record in an impressive feat of calculation. On March 6, computer clusters from three institutions (the EPFL, the University of Bonn, and NTT in Japan) reached the end of eleven months of strenuous calculation, churning out the prime factors of a well-known, hard-to-factor number — 2^1039 - 1 — that is 307 digits long." The lead researcher believes "the writing is on the wall" for 1024-bit encryption. "Last time, it took nine years for us to generalize from a special to a non-special hard-to-factor number (155 digits). I won't make predictions, but let's just say it might be a good idea to stay tuned."
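For scale: 2^1039 - 1 itself has 313 decimal digits; the "307 digits" refers to the composite cofactor left over after dividing out a long-known small factor, 5080711, which is what the team actually factored. A quick arbitrary-precision sketch (the value 5080711 is taken from the published reports of this factorization):

```python
# The Mersenne-form number targeted by the EPFL/Bonn/NTT team.
n = 2**1039 - 1

# The full number has 313 decimal digits...
print(len(str(n)))  # 313

# ...but the small factor 5080711 was already known, so the hard
# part was factoring the remaining 307-digit composite cofactor.
known_factor = 5080711
assert n % known_factor == 0
cofactor = n // known_factor
print(len(str(cofactor)))  # 307
```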

  • by Raul654 (453029) on Tuesday May 22, 2007 @02:34PM (#19225009) Homepage
    For an embarrassingly parallel, strictly integer application like this, I think the logical next step is to attack it with FPGAs. For such an application, it wouldn't surprise me if a large Altera FPGA could give you at least the same computation power as a large cluster, for a fraction of the price (both for the hardware and the electricity to power the thing).
  • Rather than just encrypting with some key, it seems to me that you could set up two 'encryption' agents which talk to each other and form a random proprietary "language" that only the two of them can understand. This would be very much like a one time pad [wikipedia.org] - which is basically the only truly unbreakable encryption:

    Code Talkers.

    The Navajo language basically served as a one time pad in WWII - why not use programs which generate their own method of communication (their own "language") for use in transmitting information.

    You simply could not crack it unless you already knew the information being sent.
  • by JohnA (131062) <johnanderson@gmai[ ]om ['l.c' in gap]> on Tuesday May 22, 2007 @02:40PM (#19225119) Homepage
    ...is that most Certificate Authorities who have trusted certs in the major browsers / e-mail programs will NOT sign a certificate for any keysize greater than 1024 bits.

    This artificial limitation is going to become more and more glaringly obvious as time goes on.
  • by blantonl (784786) on Tuesday May 22, 2007 @02:48PM (#19225269) Homepage
    What exactly do they mean by "the writing is on the wall" for 1024-bit encryption? Does the 307-digit set of prime factors have some bearing on the ability to break encryption, or is it just a reference to the amount of sheer computing power out in the industry today?

    I'm having a hard time making the correlation.

  • by CastrTroy (595695) on Tuesday May 22, 2007 @03:00PM (#19225475) Homepage
    But with this kind of computation time, you just have to send lots of junk traffic to make them waste all their computing resources. If you send out 500 messages a day, only 1 of which has actual usable information in it, then they are going to be wasting a lot of computing resources just to find out which messages actually have usable information. With computation times this high, it would be easy to flood them with data so that they wouldn't have enough time to decrypt everything.
  • Quadruple AES? (Score:3, Interesting)

    by W2k (540424) <.moc.liamg. .ta. .suilesnevs.mlehliw.> on Tuesday May 22, 2007 @03:06PM (#19225561) Homepage Journal
    I'm hoping there are some crypto geeks in the audience who can answer this. I know that back in the days when DES (with 56-bit keys) was the best there was, some genius invented TDES, which was simply three passes of DES, for a total key length of 168 bits. However, running DES thrice does not triple the "security" (resistance to brute-force cracking) of the cipher; rather, the 168-bit key provides security equal to that of a 112-bit key, due to the meet-in-the-middle attack.

    Now for my actual question. There isn't a symmetric crypto algorithm that I know of that can use 1024-bit keys (except for stream ciphers, maybe RC4?); the best block cipher is AES (Rijndael), which supports 256-bit keys. If one would "invent" QAES, i.e. quadruple AES, for a total key length of 1024 bits, what would the "effective" key length be?
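The technicality W2k is reaching for is the meet-in-the-middle attack: cascading a cipher twice costs an attacker only about two single-key searches plus a big lookup table, not the square of one search. A toy sketch against a hypothetical invertible 8-bit "cipher" (the cipher and all values here are made up purely for illustration):

```python
# Meet-in-the-middle against double encryption, using a toy
# invertible 8-bit "cipher" (illustration only, not a real cipher).
def enc(k, b):
    return ((b ^ k) * 3) % 256        # multiply by 3 is invertible mod 256

def dec(k, b):
    return (b * 171) % 256 ^ k        # 171 = 3^-1 mod 256

k1, k2, plain = 0x5A, 0xC3, 0x42
cipher = enc(k2, enc(k1, plain))      # "double encryption"

# Attack: tabulate all forward half-encryptions of the known
# plaintext, then decrypt the ciphertext under every second key
# and look for a match in the middle.
forward = {}
for g1 in range(256):
    forward.setdefault(enc(g1, plain), []).append(g1)

candidates = [(g1, g2) for g2 in range(256)
              for g1 in forward.get(dec(g2, cipher), [])]

# ~2 * 2^8 work instead of 2^16, and the true pair is among the hits.
assert (k1, k2) in candidates
print(len(candidates))  # 256 candidate pairs from one known block
```

The same argument applied to a quadruple cascade is why stacking AES four times would buy far less than a true 1024-bit key.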
  • by Kadin2048 (468275) * <[slashdot.kadin] [at] [xoxy.net]> on Tuesday May 22, 2007 @03:08PM (#19225589) Homepage Journal
    I hate to be the guy who pulls out the tinfoil, but why not.

    A few weeks ago I was reading Steven Levy's Crypto (not a bad book, although a little out-of-date now, but it brings back the dot-com nostalgia), in which he spends a lot of time describing the NSA's objections to strong civilian crypto in the U.S. in the 80s and early 90s. They went from absolutely opposing civilian crypto (particularly public-key stuff) tooth and nail, to suddenly just throwing in the towel. I'm sure that much of that was just political pragmatism -- with the Cold War over, they were having a harder and harder time maintaining their objections in the face of 'progress' (in the form of a lot of pressure on Congress from business and the tech sector) -- but I can't help wondering if they figured something out that made them withdraw their objections to bigger key sizes.

    Particularly since it's now known that some people on the government side knew about public-key crypto before it became public (the early-70s GCHQ paper, and I find it hard to believe that at its peak during the Cold War, someone at the NSA didn't find the same thing), they've had a long time to work on the problem. It's possible that they just totally squandered whatever lead they had and are now at the same point as the unclassified world, but that just seems unlikely to me.
  • by wfberg (24378) on Tuesday May 22, 2007 @03:20PM (#19225791)
    The Navajo language basically served as a one time pad in WWII

    No, they were code talkers. A one-time pad is a system in which every bit of the encryption key is independent of the others (never reused, unlike codewords) and entropy is maximal. Simply translating stuff from one word to another is simple substitution, a simple code.

    The reason Navajo Code Talkers were successful wasn't because the scheme was particularly advanced. In fact, it would have been computationally trivial to break. However, the messages relayed were only ever "tactical" in nature; i.e. communications in the field, of use during a fight, but old news in about 10 minutes. Had Navajo code talking been used to relay top-secret messages, it would have been broken fairly quickly. The reason for its success was that it was extremely cheap to implement for the US, and the secrets protected weren't valuable enough to spend huge effort on breaking. Economics, rather than mathematics.

    Navajo wasn't used in Europe, because Germany had sent anthropologists to the US to learn native languages, anticipating precisely this scheme.
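wfberg's distinction can be made concrete: a one-time pad XORs each message byte with a fresh, uniformly random key byte, and the moment the pad is reused it leaks. A minimal sketch (messages and variable names are invented for illustration):

```python
import secrets

def otp_encrypt(msg: bytes, pad: bytes) -> bytes:
    # Each pad byte is random and must be used exactly once.
    assert len(pad) >= len(msg)
    return bytes(m ^ p for m, p in zip(msg, pad))

msg = b"attack at dawn"
pad = secrets.token_bytes(len(msg))
ct = otp_encrypt(msg, pad)
assert otp_encrypt(ct, pad) == msg      # XOR is its own inverse

# Why reuse breaks it: XORing two ciphertexts under the same pad
# cancels the pad and exposes the XOR of the two plaintexts.
msg2 = b"retreat by sea"
ct2 = otp_encrypt(msg2, pad)
leaked = bytes(a ^ b for a, b in zip(ct, ct2))
assert leaked == bytes(a ^ b for a, b in zip(msg, msg2))
```

A fixed codebook like a natural language reuses its "key" on every message, which is exactly why it is a code, not a pad.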
  • by CastrTroy (595695) on Tuesday May 22, 2007 @03:22PM (#19225845) Homepage
    Really, it's not that bad of an idea. Create something that looks like image spam. Hide the encrypted information in the image using steganography, and send it out to millions of people, including the intended recipient. Everybody except the intended recipient deletes the message. It makes it harder to track down who you are communicating with, and harder to find out which messages actually contain useful information. It's similar to the olden days, when they used to put a secret message in the classifieds of the newspaper. Only the people who knew it was supposed to be there could actually get the hidden message, but it was there for everyone to see.
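The steganography half of that idea is simple to sketch: hide message bits in the least-significant bits of pixel bytes, where the change is visually imperceptible. A toy version over a raw byte buffer (a real tool would parse an actual image format; function names here are made up):

```python
def embed(pixels: bytearray, message: bytes) -> bytearray:
    # Write each message bit into the LSB of one "pixel" byte.
    out = bytearray(pixels)
    for i, byte in enumerate(message):
        for bit in range(8):
            idx = i * 8 + bit
            out[idx] = (out[idx] & 0xFE) | ((byte >> bit) & 1)
    return out

def extract(pixels: bytearray, length: int) -> bytes:
    msg = bytearray()
    for i in range(length):
        byte = 0
        for bit in range(8):
            byte |= (pixels[i * 8 + bit] & 1) << bit
        msg.append(byte)
    return bytes(msg)

cover = bytearray(range(256)) * 4       # stand-in for image pixel data
stego = embed(cover, b"meet at noon")
assert extract(stego, 12) == b"meet at noon"
# Each pixel value changes by at most 1, invisible to the eye.
assert all(abs(a - b) <= 1 for a, b in zip(cover, stego))
```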
  • by wamatt (782485) on Tuesday May 22, 2007 @03:30PM (#19225979)
    Not sure if this is a new idea, but this topic got me thinking. Decrypting something is really just a mathematical transform. We say it's "decrypted" if the end result "makes sense". But what if we didn't know what the final data should look like? How would we ever know it was decrypted?

    Decryption itself only makes sense once we know what method was used, i.e. RSA, DES, Blowfish, etc. However, what if that algorithm itself was dynamic and formed part of the encryption? Sort of like a more generalised version of onion encryption, i.e. encrypting the same content a number of times using different algorithms, so that the algorithms used and the sequence in which they are used form a sort of "meta-key".

  • by Podcaster (1098781) on Tuesday May 22, 2007 @03:59PM (#19226429) Homepage Journal

    From TFA:

    Is the writing on the wall for 1024-bit encryption? "The answer to that question is an unqualified yes," says Lenstra. For the moment the standard is still secure, because it is much more difficult to factor a number made up of two huge prime numbers, such as an RSA number, than it is to factor a number like this one that has a special mathematical form. But the clock is definitely ticking. "Last time, it took nine years for us to generalize from a special to a non-special hard-to-factor number (155 digits). I won't make predictions, but let's just say it might be a good idea to stay tuned."

    Reading Lenstra's comments, I get the feeling that he has a fairly high degree of confidence that they will succeed in making the leap to a mathematical generalization within a modest time frame.

    Can any security researchers tell me what GPG key length I should be using in the real world to give me a good trade-off between computational simplicity and future security please? I'm only using crypto for email and secure file storage.

    -P

  • by wamatt (782485) on Tuesday May 22, 2007 @04:13PM (#19226639)
    But with layered or onion encryption, if you decrypt an outer layer the result still looks like white noise, so you don't know whether to keep looking or start decrypting the result. With Triple DES the "meta-key" is easy: decode the contents three times using DES in succession.
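wamatt's "meta-key" can be sketched: compose several invertible transforms in a secret order, so the sequence itself is part of the key. (The byte-wise transforms below are toy placeholders, not real ciphers, and hiding the algorithm choice is security through obscurity unless each layer is independently strong.)

```python
# Toy "onion" layering where the ordering of layers is a meta-key.
# These byte-wise transforms are placeholders, not real ciphers.
LAYERS = {
    "xor5A": (lambda b: b ^ 0x5A,      lambda b: b ^ 0x5A),
    "add7":  (lambda b: (b + 7) % 256, lambda b: (b - 7) % 256),
    "mul3":  (lambda b: (b * 3) % 256, lambda b: (b * 171) % 256),
}

def onion_encrypt(data: bytes, meta_key: list) -> bytes:
    for name in meta_key:
        fwd, _ = LAYERS[name]
        data = bytes(fwd(b) for b in data)
    return data

def onion_decrypt(data: bytes, meta_key: list) -> bytes:
    for name in reversed(meta_key):
        _, inv = LAYERS[name]
        data = bytes(inv(b) for b in data)
    return data

meta_key = ["mul3", "xor5A", "add7"]    # the secret layer sequence
ct = onion_encrypt(b"layered secret", meta_key)
assert onion_decrypt(ct, meta_key) == b"layered secret"
# A wrong layer order fails to decrypt: these layers don't commute.
assert onion_decrypt(ct, ["add7", "xor5A", "mul3"]) != b"layered secret"
```

As the comment notes, each intermediate layer still looks like noise, so an attacker gets no signal about which guess to pursue.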

  • by goombah99 (560566) on Tuesday May 22, 2007 @04:20PM (#19226763)
    An old theology joke goes like this: "if god is omnipotent, can he make a rock so heavy he can't lift it?"

    But for quantum computers the question has a resonance: can a quantum computer generate prime numbers so large that another quantum computer could not factor the composite?

    I realize that there's always quantum crypto. But most folks need to be able to use RSA, not some new scheme for the privileged.

  • by kabocox (199019) on Tuesday May 22, 2007 @04:32PM (#19226977)
    What's even stupider is that the calculations themselves serve no purpose. Anyone with a napkin and a pencil can tell you whether or not the calculation is feasible on a given size computer cluster. The expected time to crack in a brute-force application of a sieve is entirely predictable. So what does cracking one prove?

    People who do this are more than harmless idiots. They waste money.


    I thought it was for the entertainment value. It reminds me of the slashdot stories of DES getting broken "quickly."
    http://en.wikipedia.org/wiki/Data_Encryption_Standard [wikipedia.org]
    http://en.wikipedia.org/wiki/EFF_DES_cracker [wikipedia.org]

    I wonder if there is an open source hardware project devoted to building purpose-built encryption crackers. I'd think that most governments would spend at least a million on encryption breaking per year. Heck, most large corporations could afford million-dollar encryption-breaking projects if they could see an ROI. I could see industrial espionage units privately having this tech.
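The predictability the quoted poster objects to really is napkin arithmetic: expected brute-force time is half the keyspace divided by the search rate. A sketch using the EFF's 1998 Deep Crack machine as a reference point (its roughly 9 x 10^10 DES keys per second is from public reports; treat the figure as approximate):

```python
# Expected brute-force time = (keyspace / 2) / keys_per_second.
def expected_days(key_bits: int, keys_per_second: float) -> float:
    keyspace = 2 ** key_bits
    seconds = (keyspace / 2) / keys_per_second
    return seconds / 86400

# EFF's Deep Crack searched roughly 9e10 DES keys per second,
# putting the average 56-bit search under a week.
des_days = expected_days(56, 9e10)
print(f"DES (56-bit):  ~{des_days:.1f} days on average")

# The same hardware against a 128-bit key is hopeless.
aes_days = expected_days(128, 9e10)
print(f"AES (128-bit): ~{aes_days:.2e} days on average")
```

The napkin version: each added key bit doubles the expected work, so a fixed machine's feasibility boundary is easy to predict in advance.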
  • This has already been done as early as 10 years ago.

    I was working in Eastern Europe on a now unclassified project, working against a low-budget illegal foreign intelligence agency. They were selling and distributing porn CDs and DVDs with thousands of pictures, one or more of which would contain an encrypted steganographic message. Their contact would purchase the DVD at one of hundreds of little markets and decrypt the proper image(s).

    It was really quite a good plan. Not only were there many possible valid messages to one or more agents, but there was also an unknown number of false messages; they may even have all been false messages that could only be put together by inference. However, since they were encrypted with PGP, we were never able to break that particular system before I left the project.

    The real genius of the plan was that it brought them in some much needed cash as well.

  • I don't think there are any good estimates of the computing power of the NSA. I suspect everything, up to and including their power bill, is classified; you'd just be getting somebody's conjecture.

    I'm not even sure that it's really raw 'computing power' that you'd want to try and assess, anyway; I was thinking about something like a novel way of factoring general numbers very quickly, something that could be implemented in specialized hardware. That doesn't seem too outside the NSA's traditional forte -- they have some good mathematicians and probably have relationships with hardware companies that would let them source a lot of (odd) stuff without anyone noticing.

    I do think it's interesting to note that of the algorithms listed as part of the NSA's "Suite B" [wikipedia.org] Good-Housekeeping-seal-of-approval list, all the PK systems are based on elliptic curves, and not prime factorization, for the trapdoor function.
  • by andy_t_roo (912592) on Tuesday May 22, 2007 @09:49PM (#19231075)
    Actually, although quantum computers can theoretically factor huge numbers trivially, they are _extremely_ sensitive to error, and as such the result is statistical, with the chance of a right answer decreasing with the size of the problem. To factor an RSA modulus would need about a one-million-qubit computer (the extra bits are needed for error correction); at the moment not even a 10-qubit quantum computer has been constructed, and current design principles don't scale well beyond about 1k qubits. As the hardness of building a quantum computer grows with the number of qubits it has to process (due to having to shield it from all outside interference), it is still quite feasible to increase the size of the modulus to a length where it is still 'hard' to break, although increases in technology would merely give you years of security, rather than the "practically forever" security that RSA currently has.
  • AI encryption? (Score:2, Interesting)

    by caller9 (764851) on Tuesday May 22, 2007 @11:15PM (#19231685)
    http://science.slashdot.org/article.pl?sid=07/02/08/1355255 [slashdot.org]

    Halfway down UbuntuDupe says:
    "No. If I needed to give someone in China the new encryption key, I'd simply put my own lock -- which only I have the key to -- on the suitcase. Then I'd ship it to him. Then he'd put his own lock on it (i.e., now it has my and his lock), and ship it back. Then I'd remove my lock and ship it to him. Then he'd remove his lock and open it."

    Replace "new encryption key" with data. Replace later references to key and lock with OTP.

    Works, no?
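The suitcase trick is Shamir's three-pass protocol, and it only works when the "locks" are commutative yet resist exactly this attack. With XOR one-time pads as the locks, an eavesdropper who sees all three passes recovers the message for free, so the substitution the poster suggests fails. A sketch of the failure (values are hypothetical):

```python
import secrets

# Shamir-style three-pass with XOR "locks" (insecure on purpose).
msg = int.from_bytes(b"secret", "big")
a = secrets.randbits(48)   # Alice's pad, "her lock"
b = secrets.randbits(48)   # Bob's pad, "his lock"

pass1 = msg ^ a      # Alice -> Bob: message under Alice's lock
pass2 = pass1 ^ b    # Bob -> Alice: both locks applied
pass3 = pass2 ^ a    # Alice removes her lock; only Bob's remains
assert pass3 ^ b == msg             # Bob recovers the message

# But so can anyone watching the wire: the pads cancel pairwise.
eavesdropped = pass1 ^ pass2 ^ pass3
assert eavesdropped == msg
```

Real three-pass instantiations use commutative operations (e.g. modular exponentiation) where combining the transcripts does not cancel the keys.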
