Has the Decades-Old Floating Point Error Problem Been Solved? (insidehpc.com)
overheardinpdx quotes HPCwire:
Wednesday a company called Bounded Floating Point announced a "breakthrough patent in processor design, which allows representation of real numbers accurate to the last digit for the first time in computer history. This bounded floating point system is a game changer for the computing industry, particularly for computationally intensive functions such as weather prediction, GPS, and autonomous vehicles," said the inventor, Alan Jorgensen, PhD. "By using this system, it is possible to guarantee that the display of floating point values is accurate to plus or minus one in the last digit..."
The innovative bounded floating point system computes two limits (or bounds) that contain the represented real number. These bounds are carried through successive calculations. When the calculated result is no longer sufficiently accurate the result is so marked, as are all further calculations made using that value. It is fail-safe and performs in real time.
Jorgensen is described as a cyber bounty hunter and part-time instructor at the University of Nevada, Las Vegas teaching computer science to non-computer science students. In November he received US Patent number 9,817,662 -- "Apparatus for calculating and retaining a bound on error during floating point operations and methods thereof." But in a followup, HPCwire reports: After this article was published, a number of readers raised concerns about the originality of Jorgensen's techniques, noting the existence of prior art going back years. Specifically, there is precedent in John Gustafson's work on unums and interval arithmetic both at Sun and in his 2015 book, The End of Error, which was published 19 months before Jorgensen's patent application was filed. We regret the omission of this information from the original article.
Pi (Score:2, Funny)
So, what is the last digit of Pi?
Re:Pi (Score:4, Funny)
Re: (Score:3)
The question is (Score:2)
what does "MOUNT TAPE U1439 ON B3, NO RING" mean?
is the answer who gets the last slice of Pi?
Re: (Score:2)
What was in tape U1439? The rest of that makes sense.
Oh, yes, was it 7-track? 9-track? Or 6250?
Re: (Score:2)
6250 bits per inch was an IBM standard on 9-track tapes from 40 years ago. The 9th track was a parity bit, IIRC.
Re: (Score:2)
I believe the question was PC LOAD LETTER.
Re: (Score:2)
In binary, 1 of course
Re: (Score:3)
Or maybe 0. There's some uncertainty in the last digit.
Re: (Score:3)
It's easy to refute this claim. The bible is wrong since it's just a book written by people a long time ago.
No, that's not how you refute a claim. Just because something is old and written by people does not mean it is wrong. (Of course, that doesn't mean it's right either.)
pi = 3 is wrong because it is possible to calculate the ratio of the circumference to the diameter of a circle mathematically. And the answer is not 3.
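For anyone who wants to check without trusting anyone's book: Archimedes' polygon construction computes that ratio from scratch. A short Python sketch (my own illustration, not anything from TFA):

```python
import math

# Archimedes' construction: the half-perimeter of a regular polygon
# inscribed in a unit circle approaches the circumference/diameter ratio.
# The starting hexagon gives exactly 3, which is where "pi = 3" comes from;
# doubling the sides shows the true ratio is larger.
side = 1.0   # side length of the inscribed hexagon
n = 6
for _ in range(10):
    # side length after doubling the number of sides (half-angle identity)
    side = math.sqrt(2.0 - math.sqrt(4.0 - side * side))
    n *= 2

pi_estimate = n * side / 2.0
print(pi_estimate)   # agrees with math.pi to roughly 7 digits here
```

(Fittingly for this thread, many more doublings make it worse, not better: the subtraction 2 - sqrt(4 - s^2) cancels catastrophically in floating point.)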
Re: Pi (Score:3)
I'll bite. When you say inner and outer diameter, what do you mean? I somehow doubt you're referring to the diameters of something like a washer.
Re: (Score:2)
How, exactly, do you base a digital number system on anything except a natural number greater than one?
Re: (Score:2)
That is not how bases work. So how do you count with this new system, where "1" is an irrational number (and so is 10; "3.18" is just an approximation)? In fact, *every* integer is irrational!
Re: (Score:2)
You can count as many pi as you like.
Re: (Score:2)
There's plenty of extant work showing how an irrational radix works, though I wouldn't say most of it is terribly serious. Probably the most often cited example is phinary [wikipedia.org].
Admittedly, phinary does have the advantage that non-negative integers all have a terminating exact representation in it, which base-pi does not. But this thread isn't about suitability; it's about possibility. And base-pi is certainly possible.
In fact, *every* integer is irrational!
This statement is wrong in more than one way. First, representation does not determine whether
Re: (Score:2)
pi in base pi would be 10 (same as 8 in base 8 is 10 and 2 in base 2 is 10) so there's an argument that the last digit is 0.
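You can even play with this: a greedy base-pi expansion fits in a few lines of Python (my own toy, digits 0 through 3; floating point limits how many fractional digits are trustworthy):

```python
import math

def base_pi_digits(x, n_frac=8):
    # greedy expansion of x >= 0 in base pi; digits are in {0, 1, 2, 3}
    power = 0
    while math.pi ** (power + 1) <= x:
        power += 1
    out = []
    for p in range(power, -n_frac - 1, -1):
        place = math.pi ** p
        d = int(x / place)
        out.append((p, d))
        x -= d * place
    int_part = "".join(str(d) for p, d in out if p >= 0)
    frac_part = "".join(str(d) for p, d in out if p < 0)
    return int_part + "." + frac_part

print(base_pi_digits(math.pi))  # "10.00000000": pi really is written 10
print(base_pi_digits(2.0))      # 2 happens to terminate
print(base_pi_digits(4.0))      # most integers do not
```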
floating point has many problems (Score:4, Insightful)
the "bounds" also have the same issue, it's making the problem smaller but not eliminating it
Re: (Score:3)
If done right, this works. It is also more than half a decade old: https://en.wikipedia.org/wiki/... [wikipedia.org]
Re: (Score:2)
That should be "century". It was discovered around 1950.
Re: (Score:2)
more accurate to say, if done right it makes things better.
more bits also makes things better. Quad-precision floats are supported either in software (e.g. Fortran 2008, including GNU Fortran, and the Boost libraries) or in hardware such as POWER9 or Z series mainframes. SPARC V8 and up defines hardware support, but no one has implemented it yet.
Re: (Score:2)
I agree. It also depends on what you need. If, for example, you must be able to compare for ordering accurately and must know when you need extra effort, interval arithmetic is the way to go. If, on the other hand, you just need a small error, more precision in everything is the way to go. It is always good to have different tools with different strength and weaknesses available and float calculations are no exception.
Re: (Score:2)
> more accurate to say, if done right it makes things better.
Yup.
A few years back one of the astronomy researchers where I work came to see us oiks in the IT department with a perplexing problem:
Programs which worked perfectly happily on 32-bit OSes were giving wrong answers on 64-bit ones, even on the same hardware (linux but it was the same on windows, we checked)
After a bit of head scratching and some digging we found out what the problem was: _both_ OSes were giving wrong answers.
I'm sure some of y
Re: (Score:2)
the "bounds" also have the same issue, it's making the problem smaller but not eliminating it
This. You'll still have error propagation as you combine numbers in mathematical calculations.
I haven't read the patent, but I don't see how they can get around that. The error on a calculation will grow into the more significant digits, even if you get fancy with the handling of the last digit's error.
Re: (Score:2)
It can automate the tracking of floating-point uncertainty, so that when the answer 42 pops out, you have a good indication that your result is only accurate to plus or minus 2 to the 19th power. There are a lot of operations (for example, dividing by a very small number) where the magnitude of potential errors explodes.
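The division case is easy to see for yourself. A tiny absolute uncertainty in a near-zero denominator becomes a huge absolute spread in the quotient (my own sketch, made-up numbers):

```python
x = 1.0
d = 1e-8      # denominator close to zero
eps = 1e-12   # tiny absolute uncertainty in the denominator

lo = x / (d + eps)
hi = x / (d - eps)
print(lo, hi)
print(hi - lo)   # the spread of possible quotients is about 20,000
```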
Built-in error bars (Score:5, Informative)
For those of you too lazy to read the summary, their method is that instead of using one floating point value, they use two and say the real answer is between those two. If the two floats are consistent when rounded to the requested precision, it declares the value correct. If they differ, it gives an accuracy error.
So, for only twice the work and a little overhead on top, this process can tell you when to switch to a high precision fixed-point model instead of relying on the floating point approximations.
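A minimal Python sketch of that scheme, assuming plain interval arithmetic (a real implementation also needs outward/directed rounding at every step, and TFA's format packs the bound into the same word; this toy skips both):

```python
from dataclasses import dataclass

@dataclass
class Bounded:
    lo: float
    hi: float

    def __add__(self, other):
        return Bounded(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Bounded(min(p), max(p))

    def ok_to(self, decimals):
        # the displayed value is trustworthy at this precision only if
        # both bounds round to the same number
        return round(self.lo, decimals) == round(self.hi, decimals)

a = Bounded(0.1 - 1e-16, 0.1 + 1e-16)   # 0.1, give or take representation error
b = a + a + a                            # bounds widen with every operation
print(b.lo, b.hi, b.ok_to(6))            # still fine at 6 decimals
print(Bounded(0.0, 1.0).ok_to(2))        # bounds disagree: flag the result
```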
Re:Built-in error bars (Score:5, Insightful)
Re:Built-in error bars (Score:5, Insightful)
Re:Built-in error bars (Score:4, Interesting)
The reason why it is not just "twice the work" is that a crude interval approach only works for linear equations. Consider y = x^2 for small x. Evaluating at the endpoints x = -0.1 and x = 0.1 gives y = 0.01 in either case, yet the actual value x = 0 gives y = 0. So for -0.1 < x < 0.1 the true y can fall outside the endpoint-predicted range.
For many simple systems of equations, it is possible to get good error analysis by using some Calculus. Specifically, multiply the expected error in the inputs by the derivative (or an upper bound on the derivative).
Unfortunately, most of the people working in this area are solving complex systems of equations, often involving large matrices. That makes the error difficult to estimate with computationally fast approaches. Some people use Monte Carlo simulations to estimate the likely error; these require far more than double the computation (often thousands of runs). It is also possible to do serious error analysis to determine when linear interval analysis applies and when it doesn't, but this too requires far more calculation.
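The y = x^2 failure mode above is easy to demonstrate, and it also shows why proper interval multiplication is loose rather than wrong (my own sketch):

```python
def square_by_endpoints(lo, hi):
    # naive: evaluate f(x) = x*x only at the interval's endpoints
    return min(lo * lo, hi * hi), max(lo * lo, hi * hi)

def interval_mul(a, b):
    p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return min(p), max(p)

x = (-0.1, 0.1)
ep = square_by_endpoints(*x)
iv = interval_mul(x, x)
print(ep)   # both bounds are ~0.01: the enclosure misses y = 0 entirely
print(iv)   # ~[-0.01, 0.01]: contains the true range [0, 0.01], but is
            # twice as wide, because x*x forgets the two factors are equal
```

That second effect is the "dependency problem": interval arithmetic treats every occurrence of a variable as independent.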
Re:Built-in error bars (Score:4, Interesting)
No special hardware or patents needed.
Re:Built-in error bars (Score:5, Interesting)
Every time you multiply two floats you lose a digit of precision. It's a little more complicated but that is the essence.
The butterfly effect was discovered with weather simulations. They saved their initial data and ran it again the next day -- and got a different result, which is impossible.
Turns out they only saved the initial conditions to 5 digits and not the entire float, or what passed for it in the early 1960s.
Lo and behold! The downstream numbers, far from returning to the same value, diverged wildly. And no matter how small the difference, it always diverged.
Up until then, scientists had believed small differences would get absorbed away in larger trends. Here was evidence the big trends were completely dependent on initial conditions to the smallest detail.
That's why one stray photon would screw up time travel -- any difference whatsoever would cause the weather to be different in about a month and soon different sperm are meeting different eggs, and the entire next generation is different.
So any time you do more than a trivial number of float multiplies, you are in a whole different world. This is ok if you are looking for statistical averages over many, but miserable if you want to rely on any particular calculation.
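You can reproduce the effect with the textbook chaotic toy, the logistic map (my own sketch, nothing to do with the patent):

```python
# two runs of the chaotic logistic map, identical except for a
# 1e-15 perturbation in the initial condition
r = 4.0
x, y = 0.2, 0.2 + 1e-15
diffs = []
for step in range(60):
    x = r * x * (1.0 - x)
    y = r * y * (1.0 - y)
    diffs.append(abs(x - y))

print(diffs[0])     # still tiny after one step
print(max(diffs))   # order 1: the trajectories have fully decorrelated
```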
Re: (Score:2)
That's why one stray photon would screw up time travel -- any difference whatsoever would cause the weather to be different in about a month and soon different sperm are meeting different eggs, and the entire next generation is different.
s/would/could/g
Re: (Score:2)
s/would/could/g
Either events are preordained and one photon doesn't matter that much, or the world is mechanistic and one photon will change everything.
Re: (Score:2)
Either events are preordained and one photon doesn't matter that much, or the world is mechanistic and one photon will change everything.
Or it's both and you can't tell until you open the box.
Re: (Score:2)
Or, in other words, a stable system vs a chaotic system.
It appears that our world is fundamentally chaotic, and that single stray photon causes a sphere of changes that expands at the speed of light.
Re: (Score:2)
Every time you multiply two floats you lose a digit of precision. It's a little more complicated but that is the essence
Not a whole decimal digit. Just one bit. Right?
Re: (Score:2)
They do, in nature, just not in computers that treat every result as exact instead of fuzzing the results appropriately.
Re: (Score:2)
Up until then, scientists had believed small differences would get absorbed away in larger trends.
They do, in nature
Source? My understanding is that small differences lead to huge differences in nature. I'd imagine that a Pachinko game is a good illustration of that: two balls with minutely different starting positions/velocities will eventually follow significantly different paths. It's only when you look at the statistical averages over many paths that a pattern/trend emerges. In this way, nature is exactly like computer calculations.
Re: (Score:2)
That's why one stray photon would screw up time travel -- any difference whatsoever would cause the weather to be different in about a month and soon different sperm are meeting different eggs, and the entire next generation is different.
This is why I'm positing complex, hyperbolic time when I invoke "time travel" (which it only sort of is since it still doesn't involve going backward, just altering the angle at which you proceed forward). Even if you could figure out how to swim upstream, you still could never return to a previous point because the hyperbolic nature of the complex time plane magnifies the inevitable error on the complex axis. You could go back to 1933 and kill Hitler, but it would be in someone else's timeline, thus no gra
Re: (Score:2)
The time bit isn't the story though. It's my deus ex machina to get the people to where the story happens and not a lot more. Thus, I don't mind if the idea is not new. It's nice to see it's somewhat credible-sounding, and I don't have any entropy violations because there is no backward time travel -- it will be implied more than stated that it is still considered impossible since the tech level will only be 20 years from now. The story is more "Gilligan's Island, in space, with monsters".
Re:Built-in error bars (Score:4, Insightful)
This is also known as Interval Arithmetic, vintage ca. 1950 and available in many numerics packages. Just putting known algorithms in hardware does not make them anything meriting a patent.
Re: (Score:2)
Actually, implementing the logic gates for it can very well be worth a patent. Since there doesn't seem to be any prior art for actually implementing it with logic gates in a quick patent DB search, that definitely makes it a justifiable patent.
Re: (Score:2)
In most countries, patents require the invention not to be "obvious to anyone sufficiently skilled in the art". Only in America is "not blatantly obvious to a blithering idiot with some $$$" the criterion.
Re: (Score:2)
Indeed. It is time this US-based nonsense stops and patents are only granted for things that have more merit than standard, straightforward engineering work.
Re: (Score:2)
The best patents are for things that weren't obvious beforehand but are obvious afterwards.
The question becomes "Why had no one implemented it in hardware before?" (i.e., had it been suggested and thought too hard?)
Re: (Score:2)
Just because something is "obvious" in math(that is, on the theoretical side) does not mean how to implement it in hardware is obvious. You are making a false equivalence here.
Oh, and making an "obvious" claim in hindsight is both intellectually and morally disingenuous. A patent DB search yields no results on previous attempts to implement this in hardware, thus it is justifiable. There's no prior art for an actual hardware implementation. The theory side doesn't come into it.
Re: (Score:2)
So the newly patented implementation of a 50-60 year old technique that I learned in high school chemistry is a "game changer"?
Re: (Score:2)
If it gets people to pay attention to "interval approaching size of the known universe" 20 multiplies down the road, sure.
Re: (Score:3)
It won't. The technique has been available for 50-60 years and it hasn't yet. To use the new implementation, code will have to be changed and you'd have to be using a processor that supports it.
Re: (Score:2)
So, unlike their claim, this works at almost precisely half the real-time speed of traditional floating point arithmetic.
Re: (Score:3, Interesting)
There's flaws in the principle other than the obvious doubles-the-work feature, though. Error 'bars' only show a pa
Re:Built-in error bars (Score:4, Interesting)
That's not the only flaw, of course. If you take a workhorse numeric problem like integrating an ordinary differential equation, interval arithmetic can give you a bound on how accurate the calculation is relative to an infinite-precision calculation, but not on how accurate the calculated solution is relative to the true solution. I'll give you one guess which bound is the one you actually want.
In the case of ODE solving or numeric quadrature, the thing that determines the accuracy of the solution is how well-behaved the function is in the regions that fall between your samples. Neither interval arithmetic nor this "bounded floating point" is going to help you here.
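To make that concrete, here is a sketch (my own, standard textbook example) of forward Euler on dy/dt = y: the error that matters is the method's truncation error, which rounding-error tracking cannot see:

```python
import math

# forward Euler on dy/dt = y, y(0) = 1, integrated to t = 1 in 100 steps;
# the exact answer is e
h = 1.0 / 100
y = 1.0
for _ in range(100):
    y += h * y

method_error = abs(y - math.e)
print(method_error)   # about 0.013: vastly larger than any rounding error
                      # (~1e-16 per operation) an interval could report
```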
Now having said that, I did read it, and the method described is not quite interval arithmetic. It's a little more subtle than that: the programmer sets the desired error bounds on the solution, and the FPU does something like a quiet NaN if the calculation exceeds those bounds.
And yes, just like with interval arithmetic, all our floating point libraries will need to be rewritten.
That's true, but TFA is talking up the safety-critical aspect, which I suppose is the one place where worst-case behaviour is what you want. TFA also mentions weather simulations, and that's exactly the case where the distribution of solutions is the answer you want, not worst-case bounds.
It's a little bit interesting, but Betteridge's Law definitely wins this round. I can see this as being slightly more useful than existing techniques (e.g. interval arithmetic) in some safety-critical systems, but I'd rather just have finer rounding control on a per-variable basis. We've had SIMD instructions on commodity hardware for almost two decades now, so I don't know why we don't already have essentially-free interval arithmetic within a register.
This doesn't "solve" "the floating-point error problem", as if there's only one problem outstanding. The full-employment theorem for numeric analysts has not been violated.
For those too lazy to view the patent... (Score:2)
Re: (Score:2)
I certainly had to do that when I wrote some financial software years ago. Floating point math, even at two decimals, was just way too error prone. So some long ints did the trick, and the decimal point was inserted when the number was formatted for display.
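The trick looks like this (my own sketch; the formatting helper is hypothetical and only handles non-negative amounts):

```python
# keep money as integer cents; the decimal point exists only at display time
price_cents = 1999   # $19.99
qty = 3
total_cents = price_cents * qty

def fmt(cents):
    # hypothetical display helper for non-negative amounts
    return f"${cents // 100}.{cents % 100:02d}"

print(fmt(total_cents))   # $59.97, exactly
print(19.99 * 3)          # binary floating point may not land on 59.97 exactly
```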
Re: (Score:2)
I've done 300 bit precision arithmetic using gmp, calculating uniformity bounds for a crypto thing. Computing that with bounded doubles would not give me any more precision.
I would only know it's broken. When you need more precision, more bits is what you need, usually in your mantissa.
Re: (Score:2)
Floating point crypto = You're joking, right?
Floating point evaluation of the bounds on an integer algorithm's uniformity. It's really rather serious.
Re: (Score:2)
Bouncing virtual molecules around in a weather simulation will eat those 256 bits, or digits, in less time than you can blink, and give different results with the 256th bit different in one molecule's position or momentum.
Re: (Score:2, Insightful)
But the differences are meaningless. Substantial, yes; but since differences smaller than the resolution of the universe can lead to meaningful divergence in the simulation, that precision has little value.
Re: (Score:3)
Not if these divergences increase exponentially. Think of a billiard ball: a minute change of its position or angle won't have a noticeable effect if it doesn't interact with other balls. Every time it hits another ball, though, the divergence is amplified.
Re: (Score:2)
And get perfect numbers for everything but irrational numbers, for which there is no easy precise solution anyway.
Would probably be faster too, since the ratios could be two integers.
Yes. I've used rational types in python. When you are doing arithmetic in the rationals, it's nice to use this and get precise answers.
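For anyone who hasn't tried it, the stdlib `fractions` module does this out of the box:

```python
from fractions import Fraction

# exact rational arithmetic: no rounding at any step
x = Fraction(1, 3) + Fraction(1, 6)
print(x)   # 1/2, exactly

# the classic float surprise does not happen with rationals
print(0.1 + 0.2 == 0.3)                                       # False
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))   # True
```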
For 10x the work, I use assembly! (Score:2)
We used to use a lot of fixed point math that you could configure to get the range and resolution needed.
It's amazing how well optimized these systems were. Much of the expensive math and rational number problems were invisible because of the internal data representations being carefully planned.
There was a dll generated at build time that provided information to properly convert the internal representation to engineering units. It was very interesting doing the conversions
Divide 63,000,000 by foo, then a
Reinvented interval arithmetic (Score:2, Interesting)
As mentioned in the summary, this sounds no different from the age old interval arithmetic. The reason interval arithmetic never took off is that for the vast majority of problems where error is actually a problem, the bounds on the error become so large as to be worthless. To fix this you still need to employ those specialist numerical programmers, so this doesn't actually get you anywhere.
Re: (Score:2)
If you read TFA, it is a novel variant of interval arithmetic. You can think of this as interval arithmetic using a more compact representation, plus something like quiet/signalling NaNs if a computation exceeds programmer-defined error bounds.
I can see a few use cases (e.g. maybe some safety-critical environments), but it isn't "the floating-point problem".
Re: (Score:2)
Re: (Score:2)
Sorry, but while it could be the programmer's fault, this isn't certain. Different processors have different precisions, unless you are saying he should do the entire thing in software using infinite precision integers.
It's reasonable to claim that if it's still running on the platform it was designed for, it's the programmer's fault, but I've had things migrated without my knowledge. Usually, admittedly, to a platform that had higher precision, but there have been exceptions. One time I *did* rewri
Re: (Score:2)
Re: (Score:2)
And once, when it was appropriate, I did replace the floating point approximation by an integer based approach. But there are costs in every choice. Usually that's a bad idea, even if it does mean the thing will either work properly or fail noticeably (e.g., out of memory error). But any design makes assumptions about the hardware it's running on. It's usually possible to check, but that's usually not a good approach...or wasn't in the environment in which I was working. If I'd been writing a library o
Re: (Score:2)
Re: (Score:2)
Exact decimal representations only work if all the numbers are exactly representable in a decimal representation of bounded length. That isn't the case with something as simple as 1.0/3.0. The decimal numbers are primarily designed for financial calculations, in which all the numbers are normally set up to only require a few decimal places, and where the rounding methods are fixed. In any situation where we're dealing with numbers that haven't been deliberately picked to be easy special cases, regular f
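The 1.0/3.0 point is a one-liner to verify with Python's `decimal` module (my own sketch):

```python
from decimal import Decimal, getcontext

getcontext().prec = 28   # the default precision; shown for clarity

# exactly representable decimals behave the way accountants expect
print(Decimal("0.10") + Decimal("0.20") == Decimal("0.30"))   # True

# but 1/3 has no finite decimal expansion, so Decimal rounds too
third = Decimal(1) / Decimal(3)
print(third * 3 == Decimal(1))   # False: the rounding error resurfaces
```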
Re: (Score:2)
Re: (Score:2)
Um, yes. If you use a floating-point number of insufficient precision, it won't work well. People generally know this stuff.
As a competent programmer, I do know that decimal calculations do not do well with transcendental numbers or trigonometric functions, and you should just go with regular floating-point. It's just as accurate (given enough precision), it's faster, and it's probably more predictable.
As you say, the decimal data type supports decimal places exactly. (Duh.) So, what's that useful
Re: (Score:2)
Re: (Score:2)
You can patent Math? (Score:4, Insightful)
The perversions of the US patent system are truly astounding.
Also sounds very much like they re-invented Interval Arithmetic, which was discovered originally around 1950 and has been available in numeric packages for a long time. And, to top it off, the title is lying: Interval Arithmetic does not give you an accurate representation. It just makes sure you always know the maximum error.
Pathetic.
Re: (Score:2)
No.
But this appears to be an implementation of something like Interval Arithmetic inside of a math co-processor. Previous implementations have been in math libraries.
Large scale omissions (Score:3)
Earlier prior art, https://en.m.wikipedia.org/wik... [wikipedia.org]
Patents, implementations etc. Not that there isn’t room for innovation as well as popularization.
Standards efforts (IFIP, IEEE) are ongoing. Yearly conferences in reliable computing, so both the original article and most likely the patent itself gloss over engineer-decades (if not centuries) of work.
back to basics ? (Score:5, Interesting)
In the 1950s, when the public was first becoming aware of computers, computers were considered to be large calculators. They could do math. They could be used by the IRS to compute your taxes or by the military to analyze sensor inputs and guide missiles. Few people could envision a future where computers could manipulate strings, images, sounds and communicate in the many ways that we now enjoy.
But today we have all of those once-unimaginable benefits except one: computers still can't really do math well. Oh, the irony!
Re: (Score:2)
That depends on what you mean by "do math." If you mean arithmetic and symbolic manipulation, then yes they can. In fact they're damn good at it.
But if you mean possessing a sentience with a motivation to seek out theorems and construct proofs for them, then not so much, but let's wait and see what AI brings us.
Re: (Score:2)
It would be an advance for "AI" to suss when it's appropriate to go down the rabbit hole, into the weeds, into a cpu-intensive series of loops when precision becomes important. The bounds of a series of linked algorithms coupled to its dataset, makes for an accurate day.
This also means unleashing AI, then making it do what all of us have had to do since the beginning of time: show proof. This is where software test is supposed to catch errors, where QA decides that the boundary of inputs deigns the precisio
Not solved (Score:2)
Until someone does this on a chip it's meaningless, and as others have pointed out, the solution isn't new.
"Representation of real numbers accurate" (Score:2)
"to the last digit." Cool. I'd like a representation of pi, please.
Re: (Score:2)
Re: (Score:2)
Please go back to undergraduate computer science and re-learn the difference between accuracy and precision.
"3" is a representation of pi accurate to the last digit. It's just very imprecise.
Re: (Score:2)
"to the last digit." Cool. I'd like a representation of pi, please.
8=========D
Wrong gender.
Perl - bigrat (Score:2)
Re: (Score:2)
The set of rationals is closed under ordinary arithmetic operations. It isn't closed if you add square roots or trigonometric functions. Such rational arithmetic was required in the Common Lisp standard of about 1994, and present in implementations going a good deal further back, long before Perl 6.
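The closure point is easy to see in code (Python's `fractions` here, though any exact-rational type behaves the same):

```python
from fractions import Fraction
import math

a, b = Fraction(3, 7), Fraction(2, 5)

# closed under ordinary arithmetic: every result is still an exact rational
print(a + b, a * b, a / b)   # 29/35 6/35 15/14

# not closed under square roots: sqrt(2) is irrational, so we are forced
# out of the rationals and back into an approximation
print(math.sqrt(Fraction(2)))
```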
Re: (Score:2)
Submitter didn't RTFA (Score:2)
So, they've patented interval arithmetics? (Score:2)
From what little I've read, it certainly seems to be the case.
This is both (a) old and (b) simply wrong (Score:2)
(a) The first example of this (decades before Gustafson's unums) is interval arithmetic.
(b) A trivial counter-example is any kind of Newton-Raphson [wikipedia.org] iteration to calculate the value of a given function: Even though the NR iteration for sqrt(x) or sqrt(1/x) both have quadratic convergence, i.e. the number of correct digits double on each iteration, any kind of interval arithmetic will tell you that the error instead grows to infinite levels.
I.e. the claimed benefits are totally bogus except for really trivial
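Whether the interval blows up or merely refuses to shrink depends on the formulation, but the failure is easy to reproduce. A sketch (my own, naive interval rules with no outward rounding):

```python
import math

a = 2.0

# point Newton iteration for sqrt(2): quadratic convergence
x = 1.5
for _ in range(6):
    x = 0.5 * (x + a / x)
print(x)   # essentially sqrt(2) to full double precision

# naive interval version of the same iteration, starting from the
# enclosure [1.3, 1.5]; the variable appears twice in (X + a/X)/2,
# so the interval fails to contract (the "dependency problem")
lo, hi = 1.3, 1.5
for _ in range(6):
    div_lo, div_hi = a / hi, a / lo
    lo, hi = 0.5 * (lo + div_lo), 0.5 * (hi + div_hi)
print(lo, hi, hi - lo)   # width stays around 0.2 instead of shrinking
```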
Accuracy != precision (Score:2)
People tend to forget that.
This sounds a lot less innovative than claimed (Score:2)
The innovative bounded floating point system computes two limits (or bounds) that contain the represented real number. These bounds are carried through successive calculations. When the calculated result is no longer sufficiently accurate the result is so marked, as are all further calculations made using that value.
This sounds like interval arithmetic, a.k.a. using a vectorized format for numbers that carries forward an upper and lower bound through each operation, and I wrote a Matlab datatyp
Comment (Score:2)
Yes, but does it also perform speculative execution? If so, do not want!
You mean Intel and AMD? Who use nothing patented? (Score:2)
> any company that chooses to implement it
This patent relates to CPU design, for CPUs used in critical applications involving lots of floating point operations. So by "any company" you mean "AMD and Intel".
> It'll never reach any mass adoption.
Right, neither Intel nor AMD would EVER license any patents for use in their processors (eyeroll).
Not for heavy-duty floating point (Score:2)
ARM isn't used for serious floating point work. Very much the wrong tool for the job.
Re: (Score:2)
What about irrationals, which are as numerous as the whole of real numbers?
Approximate them as 22/7.
Re: (Score:2)
"Numerous" is not the best word to use here, because it implies countability. Rational numbers are countable, i.e., you can construct a 1-to-1 mapping with the integers, but any attempt to place the irrational numbers in a 1-to-1 mapping with the integers can be shown to have omitted at least one irrational number.
One can say that the set of rationals and the set of irrationals are both infinite, but the irrationals have a strictly higher cardinality than the rationals. The curious reader should look up Transfinite C
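For the constructive-minded: the Calkin-Wilf sequence walks through every positive rational exactly once, no diagonal argument needed. A Python sketch (my own illustration):

```python
from fractions import Fraction

# the Calkin-Wilf sequence visits every positive rational exactly once,
# which is one constructive way to see that the rationals are countable
q = Fraction(1, 1)
seen = []
for _ in range(10):
    seen.append(q)
    q = 1 / (2 * (q.numerator // q.denominator) + 1 - q)

print(seen)   # 1, 1/2, 2, 1/3, 3/2, 2/3, 3, 1/4, 4/3, 3/5
```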
Re: (Score:2)
You've got an answer to the problem this was trying to deal with, but what he actually did was just put bounds on the error. As others have said, this has been known for a long time.
The only thing that's been said which "sort of" justifies this patent is that this is the reduction of a known algorithm (interval arithmetic) to a hardware implementation. Why that's worth a patent I'm not sure, but at least, unlike copyrights, patents eventually expire.
Re: (Score:2)
The only thing that's been said which "sort of" justifies this patent is that this is the reduction of a known algorithm (interval arithmetic) to a hardware implementation.
If you read TFA, it's clear that there are at least two things which can reasonably be called "new": a compact representation of a value-plus-bounds (i.e. an interval), and some way to mark (analogous to quiet NaNs, if I'm reading it correctly) a calculation that may have violated its precision requirements.
Maybe if you're working in a safety-critical field it might help with some calculations, but I can't see how it would help with most numeric problems.
Re: (Score:2)
From the patent:
Using current technology error can be reduced by increasing computation time and/or memory space. The present invention provides this error information within the inventive data structure with little impact on space and performance.
This expresses the key development better. This patent specifies a means by which this error tracking and warning mechanism can be added to floating point HW with little addition of hardware or performance penalty. The operation is not performed twice as other posters here have imagined.
The article's wording definitely misleads as this just provides a means of identifying precision violations efficiently. But, I can imagine that someone could then go further at the assembly level and automat
Re: (Score:2)
IIUC, the existing methods already had a way to mark when the bounds were violated. Compact, however, might well be new. And perhaps the way he marks that bounds have been violated is new...though I wouldn't bet on that...not given what I read in the summary.
For that matter, I'm not expert in the field, but it wouldn't surprise me if some existing library already used an analogous "compact representation". Probably not the same representation, as implementing stuff in hardware usually causes changes, but