A Suite of Digital Cryptography Tools, Released Today, Has Been Mathematically Proven To Be Completely Secure and Free of Bugs (quantamagazine.org)

By making programming more mathematical, a community of computer scientists is hoping to eliminate the coding bugs that can open doors to hackers, spill digital secrets and generally plague modern society. From a report: Now a set of computer scientists has taken a major step toward this goal with the release today of EverCrypt, a set of digital cryptography tools. The researchers were able to prove -- in the sense that you can prove the Pythagorean theorem -- that their approach to online security is completely invulnerable to the main types of hacking attacks that have felled other programs in the past. "When we say proof, we mean we prove that our code can't suffer these kinds of attacks," said Karthik Bhargavan, a computer scientist at Inria in Paris who worked on EverCrypt.

EverCrypt was not written the way most code is written. Ordinarily, a team of programmers creates software that they hope will satisfy certain objectives. Once they finish, they test the code. If it accomplishes the objectives without showing any unwanted behavior, the programmers conclude that the software does what it's supposed to do. Yet coding errors often manifest only in extreme "corner cases" -- a perfect storm of unlikely events that reveals a critical vulnerability. Many of the most damaging hacking attacks in recent years have exploited just such corner cases.

  • by olsmeister ( 1488789 ) on Wednesday April 03, 2019 @10:29AM (#58378242)
    Until it's installed on a computer that uses an Intel (tm) processor, that is....
    • by raymorris ( 2726007 ) on Wednesday April 03, 2019 @10:49AM (#58378414) Journal

      That's true. One can prove that a particular function is correct, that their code is correct. In this case, library code.

      Proving the CPU hardware and microcode is a separate step. Proving the code that USES their library is yet another thing.

      All of these can be done. It's just expensive, so you use the simplest thing that will get the job done - prove a little MCU used as a crypto co-processor, not a complex Intel CPU.

      • by skids ( 119237 ) on Wednesday April 03, 2019 @11:08AM (#58378550) Homepage

        Proving the code that USES their library is yet another thing.

        That's the rub. There will be some wrapper written to make parameter exchange automatic, or facilitate finding a VPN gateway, or whatnot, and it'll undo all the hard work in a few lines of hastily coded VB.

        There's a hard and fast requirement for establishing a MITM-proof crypto channel: you have to exchange some keying material safely... whether that's through a CA system or a preshared key or whatever, it simply must be done... and the users will choose the software which skips that step because someone assured them it is taken care of automagically, and well heck, that's much easier.
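        To make that concrete, here is a minimal Python sketch of what "exchange keying material safely" can look like in practice: pin a peer's certificate fingerprint obtained out of band, and refuse to talk unless it matches. (Illustrative only; the host name and pinned digest are made-up placeholders, and this has nothing to do with EverCrypt's own API.)

        import hashlib, hmac, socket, ssl

        # Hypothetical pinned value, distributed out of band (e.g. on paper or in config management).
        PINNED_SHA256 = bytes.fromhex("00" * 32)

        ctx = ssl.create_default_context()
        with socket.create_connection(("vpn.example.com", 443)) as sock:
            with ctx.wrap_socket(sock, server_hostname="vpn.example.com") as tls:
                der = tls.getpeercert(binary_form=True)       # peer certificate, raw DER bytes
                fingerprint = hashlib.sha256(der).digest()
                if not hmac.compare_digest(fingerprint, PINNED_SHA256):
                    raise ssl.SSLError("peer fingerprint does not match the pinned value")

        The crypto underneath can be flawless; if this step is skipped or "automagically" waved away, a MITM still wins.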

      • by lgw ( 121541 ) on Wednesday April 03, 2019 @11:19AM (#58378648) Journal

        It's impossible to prove that your crypto library is invulnerable to side-channel attacks. It is possible, however, to prove it's not vulnerable to common side-channel attacks. That's not nothing.

        Their marketing hyperbole is so over the top, however, that I wouldn't trust them with anything.

        A big problem in general in software crypto is that it's impossible to prove that the random/entropy source provided by the processor is good. There's no software work-around to that - oh, you can try to use I/O timings and so on, but those can be manipulated. Even if the code that generates the mask that is used in the fab is proven correct, we know the NSA is capable of tampering with the mask between code and fab.

        After the Snowden revelations were understood, paranoid crypto guys reached the point of "I can only trust a hardware entropy source that I built myself from components I bought myself in person from a random store." That's not exactly productizable, but it's a fair assessment of the threat of the NSA.
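        For readers wondering what a "common side-channel attack" looks like at the code level, here is a toy Python sketch (not EverCrypt code): a naive MAC/tag comparison leaks, through its running time, how many leading bytes of an attacker's guess are correct, while a constant-time comparison does not.

        import hmac

        def naive_equal(a: bytes, b: bytes) -> bool:
            if len(a) != len(b):
                return False
            for x, y in zip(a, b):
                if x != y:              # early exit: running time depends on secret data
                    return False
            return True

        def constant_time_equal(a: bytes, b: bytes) -> bool:
            # hmac.compare_digest is documented to take time independent of the contents
            return hmac.compare_digest(a, b)

        Roughly speaking, what verification of this kind aims at is that timing and memory access patterns don't depend on secrets; it cannot rule out side channels nobody has modeled.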

        • by ljw1004 ( 764174 )

          Their marketing hyperbole is so over the top, however, that I wouldn't trust them with anything.

          Really? What is the "marketing hyperbole" you're talking about? The only hyperbole I've seen in this case comes from journalists.

        • Not disagreeing, just clarifying.

          Yes you can prove that it is unaffected by known, enumerated side-channel attacks in the processor / chipset.

          You can't prove it would be unaffected by an unknown CPU side channel.

          ALSO you can prove that there are no side channels in the code itself - you can enumerate the state parameters which affect the functioning of the code, both internal and external state parameters.

          The distinction becomes important when you start proving more than one component of the system. If you pro

      • That's true. One can prove that a particular function is correct, that their code is correct. In this case, library code.

        Crypto library code is very linear, no conditionals, just a sequence of math operations. I don't think it's hard to prove correctness.

        This is just marketing wank.
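        For what it's worth, the "no conditionals" style mentioned above looks roughly like this (a sketch in Python for readability, not EverCrypt code; real constant-time code would be written in C or assembly, since Python's big integers give no timing guarantees): instead of branching on a secret bit, you build a mask and select arithmetically.

        def ct_select(bit: int, a: int, b: int) -> int:
            """Return a if bit == 1 else b (64-bit values) without a secret-dependent branch."""
            mask = (-bit) & 0xFFFFFFFFFFFFFFFF            # all ones if bit == 1, all zeros if bit == 0
            return (a & mask) | (b & ~mask & 0xFFFFFFFFFFFFFFFF)

        assert ct_select(1, 0xAAAA, 0x5555) == 0xAAAA
        assert ct_select(0, 0xAAAA, 0x5555) == 0x5555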

  • Feeling like I'd like to learn today. Could you take a moment and give me some simple examples of what it means to prove an algorithm, or whatever it is they are saying?

        • TFA actually does a good job of discussing it in layman's terms. It's surprisingly devoid of hype and hyperbole.

          As a starting point that's not TFA, Formal Verification is the sub-field of CS that this is based on: https://en.wikipedia.org/wiki/... [wikipedia.org]

          -Chris

    • Or if it uses any third-party APIs or libraries. Which you know it does. Doesn't matter if internally their program is free of faults if it interacts with something that isn't.
      • That's what the "integrity" part of encryption does: if you can only interact with a program via an encrypted stream, authenticated by public key, and the encryption code to decrypt and verify authenticity and integrity of the message is faultless, then you know that only those authenticated to the system can attack all other parts of the system. Your attack surface and threat actor set are minimized.
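        As a concrete (and hedged) illustration of that integrity property, here is a short sketch using the third-party Python "cryptography" package's ChaCha20-Poly1305 AEAD. This is not EverCrypt's own API, just the same authenticated-encryption idea: any modification of the ciphertext or associated data is rejected before the plaintext is ever used.

        import os
        from cryptography.exceptions import InvalidTag
        from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

        key = ChaCha20Poly1305.generate_key()
        aead = ChaCha20Poly1305(key)
        nonce = os.urandom(12)                        # must never repeat for the same key

        ct = aead.encrypt(nonce, b"open the gate", b"header")
        assert aead.decrypt(nonce, ct, b"header") == b"open the gate"

        tampered = bytes([ct[0] ^ 1]) + ct[1:]        # flip one bit
        try:
            aead.decrypt(nonce, tampered, b"header")
        except InvalidTag:
            pass                                       # forgery detected; nothing is released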

    • by Z80a ( 971949 )

      Or installed on a computer that for some reason tends not to execute the instructions correctly, or has a weird reset glitch that skips just the right instructions to unlock everything.

  • by JoeyRox ( 2711699 ) on Wednesday April 03, 2019 @10:31AM (#58378258)
  • by Anonymous Coward

    It's the 3rd...

  • by Anonymous Coward

    this sounds legit and not like a bullshit slashvertisement! Perhaps it has a nice helping of blockchain, too?

  • by XXongo ( 3986865 ) on Wednesday April 03, 2019 @10:35AM (#58378286) Homepage
    Interesting, but how do you prove the proof?
  • "Our code has no known bugs."

    And we pretend there aren't any unknown bugs.

    • Re: (Score:2, Informative)

      by Anonymous Coward

      From the article:

      Yet while EverCrypt is provably immune to many types of attacks, it does not herald an era of perfectly secure software. Protzenko noted there will always be attacks that no one has thought of before. EverCrypt can’t be proven secure against those, if only for the simple reason that no one knows what they will be.

    • The mathematical proof must be translated into code. You're either dumb or naive if you think that can be done flawlessly.
  • And they are one of the most important attack vectors these days. The whole Spectre attack scenario is "mathematically" non-existent: speculative execution is leaking information (through performance impacts on various caching systems) while purporting to be invisible when discarded.

    • Re: (Score:3, Informative)

      by alvinrod ( 889928 )
      But that's a security issue in the hardware itself. Would you claim that their program is insecure if you had a hardware system that allowed any program full access to the memory of any other program? Of course not, you'd point out that nothing could be considered secure when running on such a system so the question itself is flawed.
    • If a CPU tells you that 1+1=3, does this mean the math is faulty or the CPU failed?

      • Re: (Score:2, Informative)

        by Anonymous Coward

        Yes.

    • They do exist; we know how to treat them using category theory. What else is the IO monad for? We have not done so yet in any meaningful way for the broader class of unintended side channels.

  • which might include the clever/obscure bit of code inserted by an NSA/GCHQ stooge.

    How long before this stuff is banned by various governments, or they decide that running it is proof that you are a terrorist or similar?

    • I don't see why the NSA would care. They can already get your passwords anyway. My guess is that, if they wanted to, they would just look at every key I type into my computer and save all that data. Which includes my passwords. They could do it by changing my hardware, or putting cameras in my house, or various known remote attacks. Or failing that they could use a wrench to get the passwords out of me.

      Protecting yourself from state level actors is too damn difficult for an individual to do.

  • PLEASE DOWNLOAD:

    the_real_photo_tools_so_don't_run_AV.eve
  • The SPARK community has been doing this for A Long Time. https://en.wikipedia.org/wiki/... [wikipedia.org]

    I'm reminded of a quote from "The Soul of a New Machine", where another vendor (IBM?) "legitimized" minicomputers. Data General ran an ad that said, "The Bastards say 'welcome'!"

  • by ath1901 ( 1570281 ) on Wednesday April 03, 2019 @10:51AM (#58378434)

    Here's the interesting bit about what they actually did:

    The platform needed the capacity of a traditional software language like C++ and the logical syntax and structure of proof-assistant programs like Isabelle and Coq, which mathematicians have been using for years. No such all-in-one platform existed when the researchers started work on EverCrypt, so they developed one — a programming language called F*. It put the math and the software on equal footing.

    “We unified these things into a single coherent framework so that the distinction between writing programs and doing proofs is really reduced,” said Bhargavan. “You can write software as if you were a software developer, but at the same time you can write a proof as if you were a theoretician.”
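    For a feel of what "programs and proofs in one language" means, here is a rough analogue in Lean 4 rather than F* (the syntax differs, but the idea is similar): the function's type states a fact about its result, and the definition is rejected unless that fact is proved.

    -- Not F*: a Lean 4 sketch of the same idea.  The return type claims the result
    -- is strictly greater than the input; the term after the comma is the proof.
    def incr (x : Nat) : { y : Nat // y > x } :=
      ⟨x + 1, Nat.lt_succ_self x⟩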

    • by Anubis IV ( 1279820 ) on Wednesday April 03, 2019 @11:20AM (#58378650)

      F* (pronounced "F star") was the grade my university would give to students who failed a course, but not just for any failure. The F* grade was reserved for those who cheated on an exam, plagiarized papers, or were otherwise caught engaging in unethical behavior. It was a scarlet letter that went on their official transcript, forever branding them as someone you couldn't trust (or as someone who wasn't competent enough to NOT get caught, depending on your outlook).

      As a result, it's kinda hard for me to shake the feeling that there's something untrustworthy about this software.

  • KreMLin (Score:3, Interesting)

    by dcollins117 ( 1267462 ) on Wednesday April 03, 2019 @10:57AM (#58378474)
    Couldn't help but notice the code is built with a compiler named "KreMLin". Isn't that interesting.
  • by karlandtanya ( 601084 ) on Wednesday April 03, 2019 @11:03AM (#58378504)

    It's possible to be mathematically perfect...in fact, mathematically is pretty much the only way to be "completely" or "perfectly" anything.

    The basic problem with mathematical proofs is that math is an abstraction (the model), and the perfect-and-complete nature of the proof can *only* ever apply to the abstraction. The degree of fidelity with which the model reproduces your particular real situation is another story. Mathematically proving your solution is perfect is not meaningful by itself.

    It's a huge red flag that the vendor in this ad is directing our attention to a meaningless mathematical proof of perfection in their product. This strongly suggests to me that there is no valid reason to trust their design.

    Even if we stipulate a "perfect" set of cryptographic algorithms implemented in a mathematically "perfect" set of code, the solution is meaningless without proper implementation and user procedures.

    For example: I've got a great VPN (it uses OpenVPN), but when it fails, all my traffic is suddenly exposed. So I adjust my firewall rules so the only traffic allowed besides that needed to establish the VPN link must go through tun0. Or use WireGuard instead. Until next week when I'm in a factory and I need to talk to some PLCs or I/O modules to configure them, then I turn off the firewall or use the "factory" instead of the "office" setting. Now I have to remember to turn it on again or I won't be protected. etc. etc. etc.

    There are no "magic bullets", and somebody claiming to have one has just saved you the trouble of evaluating them any further.

    • by gweihir ( 88907 )

      The basic problem with mathematical proofs is that math is an abstraction (the model), and the perfect-and-complete nature of the proof can *only* ever apply to the abstraction.

      And that is exactly it. Anybody claiming to have shown that any real-world property is mathematically proven is either incompetent or a liar, or both.

      The degree of fidelity with which the model reproduces your particular real situation is another story. Mathematically proving your solution is perfect is not meaningful by itself.

      It's a huge red flag that the vendor in this ad is directing our attention to a meaningless mathematical proof of perfection in their product. This strongly suggests to me that there is no valid reason to trust their design.

      In fact there is strong reason to _not_ trust them. They are lying by misdirection about the security. They may lie about other things as well.

      Even if we stipulate a "perfect" set of cryptographic algorithms implemented in a mathematically "perfect" set of code, the solution is meaningless without proper implementation and user procedures.

      For example: I've got a great VPN (it uses OpenVPN), but when it fails, all my traffic is suddenly exposed. So I adjust my firewall rules so the only traffic allowed besides that needed to establish the VPN link must go through tun0. Or use WireGuard instead. Until next week when I'm in a factory and I need to talk to some PLCs or I/O modules to configure them, then I turn off the firewall or use the "factory" instead of the "office" setting. Now I have to remember to turn it on again or I won't be protected. etc. etc. etc.

      There are no "magic bullets", and somebody claiming to have one has just saved you the trouble of evaluating them any further.

      Indeed. Run screaming or start laughing. But do _not_ buy the snake-oil they are selling.

      • by jbengt ( 874751 )

        Anybody claiming to have shown that any real-world property is mathematically proven is either incompetent or a liar, or both.

        Not that it automatically makes them competent or truth-tellers or both, but they haven't done that in TFA. They claim they have mathematically proved that their code is secure against known attacks. Code is math, so it is subject to mathematical proof. TFA didn't include the proof, and I'm not sure that I could follow it anyway, but a lot of posts here are assuming they claimed perfection.

        • by gweihir ( 88907 )

          They claim they have mathematically proved that their code is secure against known attacks.

          They have a list of all known attacks? Fascinating. Because nobody else has that list; there are just far too many. You are also wrong that code is math in this instance. Many, many known attacks do use side channels. And there, code stops being math and is very much a physical system executing the code.

          So while they may actually have excluded all that in their paper, what remains after that is pretty meaningless.

    • For example: I've got a great VPN (it uses OpenVPN), but when it fails, all my traffic is suddenly exposed. So I adjust my firewall rules so the only traffic allowed besides that needed to establish the VPN link must go through tun0. Or use WireGuard instead. Until next week when I'm in a factory and I need to talk to some PLCs or I/O modules to configure them, then I turn off the firewall or use the "factory" instead of the "office" setting. Now I have to remember to turn it on again or I won't be protected. etc. etc. etc.

      You could just configure your firewall to exclude private subnets from the rule in the first place. Or if that's too insecure for your needs, add exclusions on a per-IP basis only when you need them (you can even give them an automatic timeout using ipset, so you don't have to remember to remove them). Or have a completely separate interface (USB Ethernet or wireless) which is exempted from the firewall rules, and use that when you don't want to route through your VPN.

      I agree, though, that the "just turn off the firewall" approach is asking for trouble.

      • You're missing the point. Google "reification fallacy".

        • Oh, I got your point. It basically boils down to "it doesn't matter how good the security is, I can find a way to fuck it up". No arguments there; I'm sure you can, and I know you're not the only one. I just thought I would offer you a way to avoid fucking it up in that particular scenario. Trying to be helpful rather than disputing your point.

    • The thing is, cryptography is pretty much just simple math, especially compared to most typical user-facing software. Cryptographic algorithms don't have to deal with checking user authentication and authorization at every entry point into the system the way a web site does, they don't have to deal with the types of concurrency issues that databases need to handle, and they don't have to deal with users pressing buttons in an unexpected order that puts the software in a state that the programmers didn't think of.
    • Spherical cows are for amateurs.

      Professionals use cubical cows of unit edge length.

      They stack *so* much better . . . :_)

      hawk

    • It's a huge red flag the the vendor in this ad is directing our attention to a meaningless mathematical proof of perfection in their product.

      Note that it's not an ad, they're not a vendor, and EverCrypt is not a commercial product. This work is the result of an academic research project, from a Carnegie Mellon professor, his PhD student, and a researcher employed by Microsoft.

  • There is no way to "mathematically prove" security of software at the confidence level of a mathematical proof. It starts with the execution model not being fit for formal representation. It continues with the compiler used not being verified. Then, formal verification is extremely high effort and infeasible in practice for anything besides a really small code base.

    At the end, this is a lie. And the ones lying must know that.

    • Re:Bullshit (Score:4, Funny)

      by CyberSlugGump ( 609485 ) on Wednesday April 03, 2019 @11:21AM (#58378670)
      Reminds me of the famous Donald Knuth quotation [stanford.edu]: Beware of bugs in the above code; I have only proved it correct, not tried it.
    • https://en.wikipedia.org/wiki/... [wikipedia.org]

      There is a verified compiler, if they made their language compiler with that then your argument falls apart. Care to detail your knowledge outside of this particular article?

      • by gweihir ( 88907 )

        A verified C compiler is nice, but they are talking correctness on the level of a very fundamental mathematical proof. Ordinary code verification does not give you that. You need to verify the compiler against the full formal description of the hardware. That is pretty much impossible.

    • by Nkwe ( 604125 )
      While you probably can't prove that the compiler will do what you think it will do, and you probably can't prove that the CPU will execute the compiled code the way you think it will, could you prove that your source code does not have a specific set of common developer introduced programming errors? A quick scan of the article seemed to indicate that this was the actual claim. While not earth shattering, it does seem useful.
      • by gweihir ( 88907 )

        I agree that this is useful. But it is a far cry from "Mathematically Proven To Be Completely Secure and Free of Bugs".

    • Of course there is. Code is executed in a logical way. Saying their code is mathematically free of bugs is both valid and provable. Now whether the compiled result, the platform or hardware allows an exploit to run on the final binary is a completely different argument.

    • by k2r ( 255754 )

      There is no way to "mathematically prove" security of software at the confidence level of a mathematical proof.

      This - on the code level - is exactly what you learned to do at university when learning functional programming languages.
      Yes, there's still a lot that can go wrong on deeper layers, but at least for their code they proved correctness.

      • by gweihir ( 88907 )

        Bullshit. You must never have heard of Hoare Calculus or "weakest precondition" calculus. I learned both in CS 102. No functional language required. (I know a few too.)

        But that is not my criticism. When they claim to "mathematically prove" some property of a real object, then the lower layers must be included. And they need to "mathematically prove" that Physics is complete and correct as well. That is impossible, as mathematics can only work on abstractions, not the real world. Anybody claiming to have "mathematically proven" something about the real world is claiming more than mathematics can deliver.
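        For anyone who hasn't met the terms above, a one-line worked example of the Hoare-triple / weakest-precondition idea (standard textbook material, nothing specific to EverCrypt), sketched in Lean:

        -- Hoare triple: {x ≥ 0}  y := x + 1  {y > 0}.
        -- Weakest precondition: wp(y := x + 1, y > 0) = (x + 1 > 0), which x ≥ 0 implies.
        -- That implication is the "verification condition", a plain math fact one can prove:
        theorem verification_condition (x : Int) (h : x ≥ 0) : x + 1 > 0 := by omega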

    • But every time there's a story posted to Slashdot that's about software patents, at least a dozen people start ranting about how software is just math. Are you telling me Slashdot is wrong?
      • by gweihir ( 88907 )

        Software is Math. Software executed on a computer is Physics. Attacks are done against software executed on a computer, not against the software itself.

    • by jbengt ( 874751 )

      There is no way to "mathematically prove" security of software on the confidence-level of a mathematical poof.

      They specifically said in the TFA that they picked Cryptography because it was feasible to mathematically prove the code is secure from known attacks. They also basically said it would be infeasible for them to try it with many types of complex programs.

      • by gweihir ( 88907 )

        They are wrong. This is not possible for crypto code either. When you look at the last few years, many vulnerabilities are information leaks. Spectre and Meltdown are probably the most famous at the moment, but there are many more, for example RSA keys leaking via timing, etc. They can prove absolutely nothing for these attacks. "Secure" most definitely means secure in practice. "Free of bugs", under the assumption the machine is flawless, is something else and that is what they may have done. Of course for

  • "...and no one had to be nailed to anything."

  • by bradley13 ( 1118935 ) on Wednesday April 03, 2019 @11:24AM (#58378690) Homepage

    Formal verification has huge potential, and can (theoretically, at least) be applied to any software that doesn't have side effects (which mainly means: everything is an input or output parameter - no GUIs, no external input/output, etc.). It's really cool to have something significant that has been formally verified.

    That said, formal verification still only takes you so far. There is no way for them to have proven immunity to side channels, or immunity to processor vulnerabilities, or even immunity to brute-force attacks. So this is an important battle, but it doesn't automatically win the war.

    • Intel used to use formal methods to prove that their CPUs worked. They had to stop proving the entire CPU as they became too complicated to complete in a reasonable time. So they just prove the core functions. That is why the Pentium bug was such a shock for them. Here was a bug missed in a fully proven processor. Oops.
  • Maybe learning a single functional language like ML to grasp the concept and a little mathematics should be mandatory for every software developer.
    At least many of the comments suggest this to me.

  • by mrlinux11 ( 3713713 ) on Wednesday April 03, 2019 @12:35PM (#58379130)
    I found this issue on GitHub, and it appears they are not as good as claimed; this is a Security 101 mistake: "Currently the generated C functions don't check for the proper length of the input arrays. Indeed the user can read it from the original F* code, but when they make a mistake the code still executes and reports success (like in #53). Would be possible to add bound checks on inputs and return an error code (like 1 in case of aead_encrypt)"
  • by captaindomon ( 870655 ) on Wednesday April 03, 2019 @12:40PM (#58379162)
  • EverCrypt is the first library to be provably secure against known hacking attacks.

    Known hacking attacks.

  • by petes_PoV ( 912422 ) on Wednesday April 03, 2019 @01:35PM (#58379566)
    ... that will cast doubt on every other cryptographic proof.

    If you were going to wave a rag in front of the hacker "bull" then claiming some code was proven secure is just the way to do it.

    • ... that will cast doubt on every other cryptographic proof.

      If you were going to wave a rag in front of the hacker "bull" then claiming some code was proven secure is just the way to do it.

      Sure, and when some clever hacker shows that there are right triangles whose legs can be squared and summed to something other than the square of the hypotenuse, it will cast doubt on every other mathematical proof.

  • Hmm, what's the delta between:
    > Has Been Mathematically Proven To Be Completely Secure and Free of Bugs
    and:
    > The researchers were able to prove -- in the sense that you can prove the Pythagorean theorem -- that their approach to online security is completely invulnerable to the main types of hacking attacks that have felled other programs in the past. "When we say proof, we mean we prove that our code can't suffer these kinds of attacks,"

    Completely, vs. narrow. The headline giveth, but the small print taketh away.

  • Proven software makes assumptions about compiler and hardware sanity. This will be exploited through stuff like RowHammer, Spectre, or Meltdown.
  • For $2.56, who said it? (Sheesh, young'uns these days.)

  • The parts about buffer overruns and certain other common errors were older parts of the system.
    The cool part (and I'll have to read the paper) is how they formalized timing to prove resistance to timing attacks.

    For those curious, this is all based on the Curry-Howard correspondence, which links type checking of programs to proofs. Given this, you can create a language whose type system allows you to prove certain things about the program. You can then use an SMT solver to prove that a translation from F* to another language preserves the properties you proved.
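    A minimal illustration of that correspondence, written in Lean rather than F* purely for illustration: the type is read as a proposition, and a well-typed program of that type is accepted as its proof.

    -- Curry-Howard in miniature: this program *is* a proof of P → Q → P.
    theorem k_rule (P Q : Prop) : P → Q → P :=
      fun hp _ => hp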
