Math Programming Hardware

Has the Decades-Old Floating Point Error Problem Been Solved? (insidehpc.com) 174

overheardinpdx quotes HPCwire: Wednesday a company called Bounded Floating Point announced a "breakthrough patent in processor design, which allows representation of real numbers accurate to the last digit for the first time in computer history. This bounded floating point system is a game changer for the computing industry, particularly for computationally intensive functions such as weather prediction, GPS, and autonomous vehicles," said the inventor, Alan Jorgensen, PhD. "By using this system, it is possible to guarantee that the display of floating point values is accurate to plus or minus one in the last digit..."

The innovative bounded floating point system computes two limits (or bounds) that contain the represented real number. These bounds are carried through successive calculations. When the calculated result is no longer sufficiently accurate, the result is so marked, as are all further calculations made using that value. It is fail-safe and performs in real time.

Jorgensen is described as a cyber bounty hunter and part-time instructor at the University of Nevada, Las Vegas, teaching computer science to non-computer science students. In November he received US Patent number 9,817,662 -- "Apparatus for calculating and retaining a bound on error during floating point operations and methods thereof." But in a followup, HPCwire reports: After this article was published, a number of readers raised concerns about the originality of Jorgensen's techniques, noting the existence of prior art going back years. Specifically, there is precedent in John Gustafson's work on unums and interval arithmetic, both at Sun and in his 2015 book, The End of Error, which was published 19 months before Jorgensen's patent application was filed. We regret the omission of this information from the original article.
  • Pi (Score:2, Funny)

    by Anonymous Coward

    So, what is the last digit of Pi?

    • Re:Pi (Score:4, Funny)

      by LynnwoodRooster ( 966895 ) on Sunday January 21, 2018 @11:50AM (#55972195) Journal
      42
    • by jmccue ( 834797 )

      In binary, 1 of course

  • by iggymanz ( 596061 ) on Sunday January 21, 2018 @11:43AM (#55972163)

    the "bounds" also have the same issue, it's making the problem smaller but not eliminating it

    • by gweihir ( 88907 )

      If done right, this works. It is also more than half a decade old: https://en.wikipedia.org/wiki/... [wikipedia.org]

      • by gweihir ( 88907 )

        That should be "century". It was discovered around 1950.

      • more accurate to say, if done right it makes things better.

        more bits also make things better: quad-precision floats are supported either in software (e.g. Fortran 2008, including GNU Fortran, and the Boost libraries) or in hardware such as POWER9 or IBM Z series mainframes. SPARC V8 and up defines it, but no one has implemented it in hardware yet.

        • by gweihir ( 88907 )

          I agree. It also depends on what you need. If, for example, you must be able to compare for ordering accurately and must know when you need extra effort, interval arithmetic is the way to go. If, on the other hand, you just need a small error, more precision in everything is the way to go. It is always good to have different tools with different strengths and weaknesses available, and float calculations are no exception.

        • > more accurate to say, if done right it makes things better.

          Yup.

          A few years back one of the astronomy researchers where I work came to see us oiks in the IT department with a perplexing problem:

          Programs which worked perfectly happily on 32-bit OSes were giving wrong answers on 64-bit ones, even on the same hardware (Linux, but it was the same on Windows; we checked)

          After a bit of head scratching and some digging we found out what the problem was: _both_ OSes were giving wrong answers.

          I'm sure some of y

    • the "bounds" also have the same issue, it's making the problem smaller but not eliminating it

      This. You'll still have error propagation as you combine numbers in mathematical calculations.

      I haven't read the patent, but I don't see how they can get around that. The error on a calculation will grow into the more significant digits, even if you get fancy with the handling of the last digit's error.

      • by Entrope ( 68843 )

        It can automate the tracking of floating-point uncertainty, so that when the answer 42 pops out, you have a good indication that your result is only accurate to plus or minus 2 to the 19th power. There are a lot of operations (for example, dividing by a very small number) where the magnitude of potential errors explodes.
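
        To make the blow-up concrete, here is a minimal interval-division sketch in Python (my illustration, nothing from the patent): dividing a value known to one part in a thousand by a small value known to one part in ten leaves bounds roughly 200 apart.

        ```python
        def interval_div(a, b):
            """Divide interval a by interval b; b must not contain zero."""
            (alo, ahi), (blo, bhi) = a, b
            if blo <= 0.0 <= bhi:
                raise ZeroDivisionError("divisor interval contains zero")
            candidates = [alo / blo, alo / bhi, ahi / blo, ahi / bhi]
            return (min(candidates), max(candidates))

        x = (0.999, 1.001)         # ~1, known to one part in a thousand
        tiny = (0.0009, 0.0011)    # ~0.001, known to one part in ten
        lo, hi = interval_div(x, tiny)
        print(lo, hi)              # about 908.2 to 1112.2: the bounds exploded
        ```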

  • Built-in error bars (Score:5, Informative)

    by Anonymous Coward on Sunday January 21, 2018 @11:43AM (#55972165)

    For those of you too lazy to read the summary, their method is that instead of using one floating point value, they use two and say the real answer is between those two. If the two floats are consistent when rounded to the requested precision, it declares the value correct. If they differ, it gives an accuracy error.

    So, for only twice the work and a little overhead on top, this process can tell you when to switch to a high precision fixed-point model instead of relying on the floating point approximations.
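
    As a rough sketch of that two-float scheme (my code, not the patented format; a rigorous version also needs directed rounding so the bounds stay true bounds):

    ```python
    def bounded_add(a, b):
        # add lower bounds and upper bounds pairwise (ignoring directed
        # rounding, which a real implementation needs for rigor)
        (alo, ahi), (blo, bhi) = a, b
        return (alo + blo, ahi + bhi)

    def check(bound, digits):
        lo, hi = bound
        if round(lo, digits) == round(hi, digits):
            return round(lo, digits)   # bounds agree: value declared correct
        raise ArithmeticError("not accurate to %d digits" % digits)

    x = (0.1, 0.1000001)
    y = (0.2, 0.2000001)
    print(check(bounded_add(x, y), 4))   # 0.3 -- bounds agree at 4 digits
    check(bounded_add(x, y), 9)          # raises -- the accuracy error
    ```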

    • by K. S. Kyosuke ( 729550 ) on Sunday January 21, 2018 @11:47AM (#55972179)
      Balls and intervals have been here for quite some time, though. Not sure that this is merely "twice the work", either.
      • by JoshuaZ ( 1134087 ) on Sunday January 21, 2018 @12:54PM (#55972521) Homepage
        Yeah, I don't see how this isn't just a hardware implementation of interval arithmetic https://en.wikipedia.org/wiki/Interval_arithmetic [wikipedia.org], which is a topic basic enough that I teach aspects of it as a secondary topic to my Calc I students. It is possible there's something deeper here that we're missing but if so, it doesn't stand out.
      • by Cassini2 ( 956052 ) on Sunday January 21, 2018 @01:46PM (#55972789)

        The reason why it is not "twice the work" is because a crude interval approach only works for linear equations. Consider: y=x^2, for small x. If the lower limit is x=-0.1 and the upper limit is x=0.1, y=0.01 at either endpoint. However, consider the case where the actual x=0, so the actual y=0. It can be seen that for -0.1 < x < 0.1, y falls outside the predicted range.

        For many simple systems of equations, it is possible to get good error analysis by using some Calculus. Specifically, multiply the expected error in the inputs by the derivative (or an upper bound on the derivative).

        Unfortunately, most of the people working in this area are solving complex systems of equations, often involving large matrices. This makes the error difficult to bound with computationally fast approaches. Some people use Monte-Carlo simulations to get estimates of the likely error; these require far more computation than merely doubling the work (often thousands of runs). It is also possible to do some serious error analysis to determine when the linear interval analysis applies and when it doesn't, but this, too, requires far more calculation.
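
        The y=x^2 case above is easy to reproduce. A hedged sketch (my code, textbook interval-arithmetic material, not TFA's): evaluating only at the endpoints misses y=0 entirely, while a proper interval square needs a special case, and that per-operation logic is where the extra work beyond "twice" comes from.

        ```python
        def naive_endpoints(f, lo, hi):
            return (f(lo), f(hi))            # wrong for non-monotonic f

        def interval_square(lo, hi):
            if lo <= 0.0 <= hi:              # interval straddles zero
                return (0.0, max(lo * lo, hi * hi))
            return (min(lo * lo, hi * hi), max(lo * lo, hi * hi))

        f = lambda x: x * x
        print(naive_endpoints(f, -0.1, 0.1))  # (0.01, 0.01): misses y = 0
        print(interval_square(-0.1, 0.1))     # (0.0, 0.01): true range inside
        ```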

    • by Anne Thwacks ( 531696 ) on Sunday January 21, 2018 @11:58AM (#55972221)
      In the olden days (1970s) we did the computation in single and double precision; if the answers were different, then you probably needed to change the method. (The underlying problem is loss of significance: you run out of significant digits.)

      No special hardware or patents needed.
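
      That old trick is easy to replicate today. A minimal sketch (mine, assuming NumPy for a genuine 32-bit float type):

      ```python
      import numpy as np

      def probe(x):
          # (x + 1) - x should be exactly 1; cancellation makes it 0 once
          # x is too large for the mantissa to also hold the 1
          one = type(x)(1)
          return (x + one) - x

      single = probe(np.float32(1e8))   # 0.0 -- float32 carries ~7 digits
      double = probe(np.float64(1e8))   # 1.0
      if single != double:
          print("precisions disagree: time to change the method")
      ```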

      • by Impy the Impiuos Imp ( 442658 ) on Sunday January 21, 2018 @01:04PM (#55972565) Journal

        Every time you multiply two floats you lose a digit of precision. It's a little more complicated but that is the essence.

        The butterfly effect was discovered with weather simulations. They saved their initial data and ran it again the next day -- and got a different result, which should have been impossible.

        Turns out they only saved the initial conditions to 5 digits and not the entire float, or what passed for it in the 1970s.

        Lo and behold! The downstream numbers, far from returning to the same value, diverged wildly. And no matter how small the difference, it always diverged.

        Up until then, scientists had believed small differences would get absorbed away in larger trends. Here was evidence the big trends were completely dependent on initial conditions to the smallest detail.

        That's why one stray photon would screw up time travel -- any difference whatsoever would cause the weather to be different in about a month and soon different sperm are meeting different eggs, and the entire next generation is different.

        So any time you do more than a trivial number of float multiplies, you are in a whole different world. This is ok if you are looking for statistical averages over many, but miserable if you want to rely on any particular calculation.
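
        The divergence is easy to demonstrate without a weather model. A minimal sketch using the logistic map (a standard chaotic toy, not the simulation from the story): two starting values differing in the 15th digit share no digits at all after about 50 steps.

        ```python
        def logistic(x, steps, r=4.0):
            # r = 4.0 puts the map in its chaotic regime
            for _ in range(steps):
                x = r * x * (1.0 - x)
            return x

        a, b = 0.4, 0.4 + 1e-15
        for steps in (10, 30, 50):
            print(steps, logistic(a, steps), logistic(b, steps))
        ```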

        • That's why one stray photon would screw up time travel -- any difference whatsoever would cause the weather to be different in about a month and soon different sperm are meeting different eggs, and the entire next generation is different.

          s/would/could/g

          • s/would/could/g

            Either events are preordained and one photon doesn't matter that much, or the world is mechanistic and one photon will change everything.

            • Either events are preordained and one photon doesn't matter that much, or the world is mechanistic and one photon will change everything.

              Or it's both and you can't tell until you open the box.

            • Or, in other words, a stable system vs a chaotic system.

              It appears that our world is fundamentally chaotic, and that single stray photon causes a sphere of changes that expands at the speed of light.

        • Every time you multiply two floats you lose a digit of precision. It's a little more complicated but that is the essence

          Not a whole decimal digit. Just one bit. Right?

        • by swilver ( 617741 )

          Up until then, scientists had believed small differences would get absorbed away in larger trends.

          They do, in nature, just not in computers that treat every result as exact instead of fuzzing the results appropriately.

          • by elistan ( 578864 )

            Up until then, scientists had believed small differences would get absorbed away in larger trends.

            They do, in nature

            Source? My understanding is that small differences lead to huge differences in nature. I'd imagine that a Pachinko game is a good illustration of that - two different balls with minutely different starting positions/velocities will follow significantly different paths eventually. It's only when you look at the statistical averages of significantly numerous paths that a pattern/trend emerges. In this way, nature is exactly like computer calculations.

        • by Mal-2 ( 675116 )

          That's why one stray photon would screw up time travel -- any difference whatsoever would cause the weather to be different in about a month and soon different sperm are meeting different eggs, and the entire next generation is different.

          This is why I'm positing complex, hyperbolic time when I invoke "time travel" (which it only sort of is since it still doesn't involve going backward, just altering the angle at which you proceed forward). Even if you could figure out how to swim upstream, you still could never return to a previous point because the hyperbolic nature of the complex time plane magnifies the inevitable error on the complex axis. You could go back to 1933 and kill Hitler, but it would be in someone else's timeline, thus no gra

    • by gweihir ( 88907 ) on Sunday January 21, 2018 @12:09PM (#55972279)

      This is also known as Interval Arithmetic, vintage ca. 1950 and available in many numerics packages. Just putting known algorithms in hardware does not make them anything meriting a patent.

      • by Shinobi ( 19308 )

        Actually, implementing the logic gates for it can very well be worth a patent. Since there doesn't seem to be any prior art for actually implementing it with logic gates in a quick patent DB search, that definitely makes it a justifiable patent.

        • Actually, implementing the logic gates for it can very well be worth a patent. Since there doesn't seem to be any prior art for actually implementing it with logic gates in a quick patent DB search, that definitely makes it a justifiable patent.

          In most countries, patents require the invention not to be "obvious to anyone sufficiently skilled in the art". Only in America is "not blatantly obvious to a blithering idiot with some $$$" the criterion.

          • by gweihir ( 88907 )

            Indeed. It is time this US-based nonsense stopped and patents were granted only for things that have more merit than standard, straightforward engineering work.

            • The best patents are for things that weren't obvious beforehand but are obvious afterwards.

              The question becomes "Why had no one implemented it in hardware before?" (i.e., had it been suggested and thought too hard?)

          • by Shinobi ( 19308 )

            Just because something is "obvious" in math (that is, on the theoretical side) does not mean that how to implement it in hardware is obvious. You are making a false equivalence here.

            Oh, and making an "obvious" claim in hindsight is both intellectually and morally disingenuous. A patent DB search yields no results on previous attempts to implement this in hardware, thus it is justifiable. There's no prior art for an actual hardware implementation. The theory side doesn't come into it.

    • by sjames ( 1099 )

      So the newly patented implementation of a 50-60 year old technique that I learned in high school chemistry is a "game changer"?

      • If it gets people to pay attention to "interval approaching size of the known universe" 20 multiplies down the road, sure.

        • by sjames ( 1099 )

          It won't. The technique has been available for 50-60 years and it hasn't yet. To use the new implementation, code will have to be changed and you'd have to be using a processor that supports it.

    • So, contrary to their claim, this works at almost precisely half the real-time speed of traditional floating point arithmetic.

    • Re: (Score:3, Interesting)

      by whit3 ( 318913 )

      instead of using one floating point value, they use two and say the real answer is between those two. If the two floats are consistent when rounded to the requested precision, it declares the value correct. If they differ, it gives an accuracy error.

      So, for only twice the work and a little overhead on top, this process can tell you when to switch to a high precision fixed-point model ...

      There are flaws in the principle other than the obvious doubles-the-work feature, though. Error 'bars' only show a pa

      • by Pseudonym ( 62607 ) on Sunday January 21, 2018 @05:36PM (#55974181)

        That's not the only flaw, of course. If you take a workhorse numeric problem like integrating an ordinary differential equation, interval arithmetic can give you a bound on how accurate the calculation is relative to an infinite-precision calculation, but not on how accurate the calculated solution is relative to the true solution. I'll give you one guess which bound is the one you actually want.

        In the case of ODE solving or numeric quadrature, the thing that determines the accuracy of the solution is how well-behaved the function is in the regions that fall between your samples. Neither interval arithmetic nor this "bounded floating point" is going to help you here.

        Now having said that, I did read it, and the method described is not quite interval arithmetic. It's a little more subtle than that: the programmer sets the desired error bounds on the solution, and the FPU does something like a quiet NaN if the calculation exceeds those bounds.

        And yes, just like with interval arithmetic, all our floating point libraries will need to be rewritten.

        the result of a large calculation is NEVER well-characterized if you propagate worst-cases.

        That's true, but TFA is talking up the safety-critical aspect, which I suppose is the one place where worst-case behaviour is what you want. TFA also mentions weather simulations, and that's exactly the case where the distribution of solutions is the answer you want, not worst-case bounds.

        It's a little bit interesting, but Betteridge's Law definitely wins this round. I can see this as being slightly more useful than existing techniques (e.g. interval arithmetic) in some safety-critical systems, but I'd rather just have finer rounding control on a per-variable basis. We've had SIMD instructions on commodity hardware for almost two decades now, so I don't know why we don't already have essentially-free interval arithmetic within a register.

        This doesn't "solve" "the floating-point error problem", as if there's only one problem outstanding. The full-employment theorem for numeric analysts has not been violated.

  • He specifies the hardware algorithms that can utilize this new representation directly, track the error amount, and flag violations with little additional work. This is not just a representation patent. It is a patent on the algorithms to utilize the representation efficiently.
  • by Anonymous Coward

    As mentioned in the summary, this sounds no different from age-old interval arithmetic. The reason interval arithmetic never took off is that for the vast majority of problems where error is actually a problem, the bounds on the error become so large as to be worthless. To fix this you still need to employ those specialist numerical programmers, so this doesn't actually get you anywhere.

    • If you read TFA, it is a novel variant of interval arithmetic. You can think of this as interval arithmetic using a more compact representation, plus something like quiet/signalling NaNs if a computation exceeds programmer-defined error bounds.

      I can see a few use cases (e.g. maybe some safety-critical environments), but it isn't a solution to "the floating-point problem".
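
      Based on that reading, a toy version of the propagating-flag behaviour might look like this (names and semantics are my guesses, not the patent's):

      ```python
      class Bounded:
          """Carry bounds; mark the value invalid once they spread too far."""
          def __init__(self, lo, hi, tol=1e-6):
              self.lo, self.hi, self.tol = lo, hi, tol
              self.invalid = (hi - lo) > tol

          def __add__(self, other):
              out = Bounded(self.lo + other.lo, self.hi + other.hi,
                            min(self.tol, other.tol))
              # like a quiet NaN: once set, the mark propagates
              out.invalid = out.invalid or self.invalid or other.invalid
              return out

      x = Bounded(1.0, 1.0 + 1e-9)
      y = Bounded(2.0, 2.0 + 1e-3)   # too wide for the tolerance
      print((x + y).invalid)          # True, and it stays True downstream
      ```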

  • Comment removed based on user account deletion
    • by HiThere ( 15173 )

      Sorry, but while it could be the programmer's fault, this isn't certain. Different processors have different precisions, unless you are saying he should do the entire thing in software using infinite precision integers.

      It's reasonable to claim that if it's still running on the platform it was designed for, it's the programmer's fault, but I've had things migrated without my knowledge. Usually, admittedly, to a platform that had higher precision, but there have been exceptions. One time I *did* rewri

      • Comment removed based on user account deletion
        • by HiThere ( 15173 )

          And once, when it was appropriate, I did replace the floating point approximation by an integer based approach. But there are costs in every choice. Usually that's a bad idea, even if it does mean the thing will either work properly or fail noticeably (e.g., out of memory error). But any design makes assumptions about the hardware it's running on. It's usually possible to check, but that's usually not a good approach...or wasn't in the environment in which I was working. If I'd been writing a library o

    • Exact decimal representations only work if all the numbers are exactly representable in a decimal representation of bounded length. That isn't the case with something as simple as 1.0/3.0. The decimal numbers are primarily designed for financial calculations, in which all the numbers are normally set up to only require a few decimal places, and where the rounding methods are fixed. In any situation where we're dealing with numbers that haven't been deliberately picked to be easy special cases, regular f
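
      A quick demonstration with Python's built-in decimal module (standing in for any bounded-length decimal type):

      ```python
      from decimal import Decimal

      third = Decimal(1) / Decimal(3)
      print(third)                      # 0.3333...3333, rounded at 28 digits
      print(third * 3 == Decimal(1))    # False: the rounding error remains

      # but it is exact for the money-style values it was designed for
      print(Decimal("0.10") + Decimal("0.20") == Decimal("0.30"))  # True
      ```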

      • Comment removed based on user account deletion
        • Um, yes. If you use a floating-point number of insufficient precision, it won't work well. People generally know this stuff.

          As a competent programmer, I do know that decimal calculations do not do well with transcendental numbers or trigonometric functions, and you should just go with regular floating-point. It's just as accurate (given enough precision), it's faster, and it's probably more predictable.

          As you say, the decimal data type supports decimal places exactly. (Duh.) So, what's that useful

  • by gweihir ( 88907 ) on Sunday January 21, 2018 @12:06PM (#55972267)

    The perversions of the US patent system are truly astounding.

    Also sounds very much like they re-invented Interval Arithmetic, which was discovered originally around 1950 and has been available in numeric packages for a long time. And, to top it off, the title is lying: Interval Arithmetic does not give you an accurate representation. It just makes sure you always know the maximum error.

    Pathetic.

    • by PPH ( 736903 )

      No.

      But this appears to be an implementation of something like Interval Arithmetic inside of a math co-processor. Previous implementations have been in math libraries.

  • by khb ( 266593 ) on Sunday January 21, 2018 @12:21PM (#55972335)

    Earlier prior art, https://en.m.wikipedia.org/wik... [wikipedia.org]

    Patents, implementations, etc. Not that there isn’t room for innovation as well as popularization.

    Standards efforts (IFIP, IEEE) are ongoing, and there are yearly conferences on reliable computing, so both the original article and most likely the patent itself gloss over engineer-decades (if not centuries) of work.

  • back to basics ? (Score:5, Interesting)

    by swell ( 195815 ) <jabberwock@poetic.com> on Sunday January 21, 2018 @12:38PM (#55972431)

    In the 1950s, when the public was first becoming aware of computers, computers were considered to be large calculators. They could do math. They could be used by the IRS to compute your taxes or by the military to analyze sensor inputs and guide missiles. Few people could envision a future where computers could manipulate strings, images, sounds and communicate in the many ways that we now enjoy.

    But today we have all of those unimaginable benefits except one: computers still can't really do math well. Oh, the irony!

    • That depends on what you mean by "do math." If you mean arithmetic and symbolic manipulation, then yes they can. In fact they're damn good at it.

      But if you mean possessing a sentience with a motivation to seek out theorems and construct proofs for them, then not so much, but let's wait and see what AI brings us.

      • It would be an advance for "AI" to suss out when it's appropriate to go down the rabbit hole, into the weeds, into a CPU-intensive series of loops, when precision becomes important. The bounds of a series of linked algorithms, coupled to their dataset, make for an accurate day.

        This also means unleashing AI, then making it do what all of us have had to do since the beginning of time: show proof. This is where software test is supposed to catch errors, where QA decides that the boundary of inputs deigns the precisio

  • This is totally theoretical. There is no implementation in silicon.
    Until someone puts this on a chip it's meaningless, and as others have pointed out, the solution isn't new.
  • "to the last digit." Cool. I'd like a representation of pi, please.

    • Pi in what space-time environment? Space-time is only flat if there is zero mass-energy around, all the way out to infinity. The official value of pi is wrong if there is any mass-energy warpage of space-time. There is no place in the universe where there is no such warping. It's a pretty math construct, but so is flat space-time. Doesn't mean it exists. Yeah, I know, no one likes a smart-ass, especially if they're right.
    • Please go back to undergraduate computer science and re-learn the difference between accuracy and precision.

      "3" is a representation of pi accurate to the last digit. It's just very imprecise.

  • There's this Perl 5 module that actually solved this exactly quite some time ago, no compromise at all (but, as Damian Conway says, we suck at marketing), if you need a bigrat to solve your problem. Perl 6 (which should have a different name; many now call it Rakudo) has this as a built-in. Rational numbers are kept exact all the way through calculations, period, if built from rational numbers in the first place (won't help with pi or sqrt 2, of course). One of the best conference presentations ever giv
    • The set of rationals is closed under ordinary arithmetic operations. It isn't closed if you add square roots or trigonometric functions. Such rational arithmetic was required in the Common Lisp standard of about 1994, and present in implementations going a good deal further back, long before Perl 6.

      • Agreed. Just saying that something I know about firsthand solved this long ago (it's an add-on in Perl 5, built in to 6). Doesn't surprise me if Lisp had it too, perhaps even slower and more memory intensive than Perl. My point was, this "news" didn't even solve it... and if it did, it'd be "old news", as you seem to agree.
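
        For the thread's non-Perl readers, the same exact-rational behaviour can be sketched with Python's built-in Fraction type (my substitution; the principle is the one described above):

        ```python
        from fractions import Fraction
        import math

        x = Fraction(1, 3)
        print(x + x + x == 1)                     # True: exact, no rounding
        print(Fraction(1, 10) + Fraction(2, 10))  # 3/10, exactly

        # but the rationals are not closed under sqrt or trig, as noted
        print(math.sqrt(2))                       # back to a lossy float
        ```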
  • The original article has a comment by someone who developed the method and released it openly.
  • From what little I've read, it certainly seems to be the case.

  • (a) The first example of this (decades before Gustafson's unums) is interval arithmetic.

    (b) A trivial counter-example is any kind of Newton-Raphson [wikipedia.org] iteration to calculate the value of a given function: even though the NR iterations for sqrt(x) or sqrt(1/x) both have quadratic convergence, i.e. the number of correct digits doubles on each iteration, any kind of interval arithmetic will tell you that the error instead grows without bound.

    I.e. the claimed benefits are totally bogus except for really trivial
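
    The sqrt case is easy to sketch (my code, illustrating the claim): the point iteration converges to full precision in a handful of steps, while a dependency-blind interval version of the very same iteration never tightens its bound at all.

    ```python
    def i_div(a, b):   # interval division, divisor assumed strictly positive
        return (a[0] / b[1], a[1] / b[0])

    def i_add(a, b):
        return (a[0] + b[0], a[1] + b[1])

    x = 1.5             # point Newton-Raphson for sqrt(2)
    ix = (1.0, 2.0)     # interval version, sqrt(2) inside
    for i in range(5):
        x = (x + 2.0 / x) / 2.0
        s = i_add(ix, i_div((2.0, 2.0), ix))
        ix = (s[0] / 2.0, s[1] / 2.0)
        print(i, x, ix[1] - ix[0])   # width stays 1.0; x converges fast
    ```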

  • People tend to forget that.

  • The innovative bounded floating point system computes two limits (or bounds) that contain the represented real number. These bounds are carried through successive calculations. When the calculated result is no longer sufficiently accurate the result is so marked, as are all further calculations made using that value.

    This sounds like interval arithmetic, a.k.a. using a vectorized format for numbers that carries forward an upper and lower bound through each operation, and I wrote a Matlab datatyp

  • Yes, but does it also perform speculative execution? If so, do not want!

"The vast majority of successful major crimes against property are perpetrated by individuals abusing positions of trust." -- Lawrence Dalzell

Working...