
LibreOffice Calc Set To Get GPU Powered Boost From AMD

darthcamaro writes "We all know that the open source LibreOffice Calc has been slow — forever and a day. That's soon going to change thanks to a major investment made by AMD into the Document Foundation. AMD is helping LibreOffice developers refactor Calc to be more performant and to leverage the full power of GPUs and APUs. From the article: '"The reality has been that Calc has not been the fastest spreadsheet in the world," SUSE engineer Michael Meeks admitted. "Quite a large chunk of this refactoring is long overdue, so it's great to have the resources to do the work so that Calc will be a compelling spreadsheet in its own right."'" Math operations will be accelerated using OpenCL, unit tests are being added for the first time, and the supposedly awful object-oriented code is being rewritten with a "modern performance oriented approach."
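The article includes no code, but to give a sense of what OpenCL acceleration of spreadsheet math looks like, here is a minimal, hypothetical sketch. The kernel name and the column-times-column operation are illustrative only, not LibreOffice's actual code:

```cpp
// Hypothetical sketch only -- not LibreOffice's actual kernel. An OpenCL C
// kernel (held here as a C++ string literal) that a spreadsheet engine could
// dispatch to evaluate a whole column of "=A*B" formulas in parallel.
// Note: double precision requires the cl_khr_fp64 extension on the device.
const char* kColumnMulSource = R"CLC(
__kernel void column_mul(__global const double* a,
                         __global const double* b,
                         __global double* out,
                         const unsigned int n)
{
    size_t i = get_global_id(0);   /* one work-item per spreadsheet row */
    if (i < n)
        out[i] = a[i] * b[i];
}
)CLC";
```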

Comments Filter:
  • by 140Mandak262Jamuna ( 970587 ) on Wednesday July 03, 2013 @12:34PM (#44177973) Journal
    If your spreadsheet needs a GPU to speed up calculations, you are probably misusing spreadsheets. I know most accountants love the spreadsheet, and they make insanely complicated things using spreadsheets, pushing them far beyond what they were designed to do. But if you have a spreadsheet that needs this much CPU time to recompute, you should probably be using a full-fledged database with multiple precomputed indexes.
    • by Russ1642 ( 1087959 ) on Wednesday July 03, 2013 @12:37PM (#44178035)
      Custom database applications are expensive and inflexible. Stop trying to tell people what they can't do with a spreadsheet.
      • by Anonymous Coward on Wednesday July 03, 2013 @12:44PM (#44178175)

        Spreadsheets are all rectangular. That's pretty inflexible. Show me a triangular spreadsheet and then we'll talk.

        • by Kenja ( 541830 ) on Wednesday July 03, 2013 @12:48PM (#44178233)
          Pivot Tables can have three or more axes.
        • An N-dimensional spreadsheet probably wouldn't be too hairy to describe as a mathematical structure; but the UI might get pretty dreadful.

          • Actually, the UI for Lotus Improv was quite nice and won some awards.

            Its (spiritual) successor, Quantrix Financial Modeler, seems to be selling well enough, even with a $1,495 price point.

            I wish that Flexisheet (an open-source take on this sort of thing) would get more traction.

        • Lotus had a cool object-based spreadsheet system on the NeXT computer, called Improv [wikipedia.org]. Improv attempted to redefine what spreadsheets were and how they worked, and once you got used to it, it was great. The basic principle was separation of data, views, and formulas (was this pre-figuring MVC?), and individual sheets could be any size - I'm not sure about 'triangular' per se, though.

          But alas, Improv never sold well on either NeXTSTEP (although it was very popular amongst financial modeling folks, and sold a

        • Spreadsheets are all rectangular.

          That's not necessarily correct. At its core, a spreadsheet is a DAG of computation-performing nodes; the rectangular presentation style is merely a historical happenstance.
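          To make that view concrete, here is a minimal sketch with illustrative names (not any real engine's code): cells are nodes that know their inputs, and a grid is just one possible way to draw them.

```cpp
#include <functional>
#include <vector>

// Illustrative "spreadsheet as a DAG": a cell is a node that knows its
// inputs; the rectangular grid is purely a presentation choice.
struct Node {
    std::vector<Node*> inputs;                          // cells this one reads
    std::function<double(const std::vector<double>&)> op;
    double value = 0.0;
    bool dirty = true;
};

// Pull-based recalculation: refresh inputs first, then this node.
// The graph is acyclic, so the recursion terminates.
double evaluate(Node& n) {
    if (!n.dirty) return n.value;
    std::vector<double> args;
    for (Node* in : n.inputs) args.push_back(evaluate(*in));
    n.value = n.op(args);
    n.dirty = false;
    return n.value;
}
```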

      • by BitZtream ( 692029 ) on Wednesday July 03, 2013 @12:48PM (#44178245)

        That's not the issue. If your spreadsheet is SO large that on a MODERN CPU it's slow ... you're doing it wrong.

        You can make insanely complex, application-like spreadsheets without noticing 'recalc' time. By the time you get to noticing 'recalc' time, you've fucked up.

        Caveat: OO.org is known to have some of the crappiest code in existence, so in the case of Calc you don't have to make ridiculous spreadsheets to notice recalc time. GPU support won't fix the problem, however, as it's not the math that's the issue; it's the shitty logic code filled with stupid crap written by clueless devs that causes Calc to be so slow.

        • GPU support won't fix the problem, however, as it's not the math that's the issue; it's the shitty logic code filled with stupid crap written by clueless devs that causes Calc to be so slow.

          Indeed. You really shouldn't have to get a gaming GPU to run a spreadsheet. Hopefully

          "and the supposedly awful object oriented code is being rewritten with a "modern performance oriented approach".

          means they intend to address that part too, and the crappy headline is just being whiz-bang. If they're dumb enough to think throwing inappropriate hardware at the problem is a solution... well, they're too far from the vicinity of the US Pacific coast...

          • Indeed. You really shouldn't have to get a gaming GPU to run a spreadsheet. Hopefully

            If you are doing trivial calculations then you are probably right. However, many of us do more with spreadsheets than making grocery lists. There are quite a few [wikipedia.org] problems that benefit from parallel processing. Since the GPU is probably sitting mostly idle if you have a spreadsheet up, why not do something useful with it?

          • by equex ( 747231 )
            Yeah, we said this about GPU rendering of the UI as well....
          • If they're dumb enough that they think throwing inappropriate hardware at the problem is a solution

            What inappropriate hardware? OpenCL, for example, is quite flexible at allowing you to distribute problems to appropriate computing nodes in a heterogeneous system.

        • by tibit ( 1762298 )

          Probably they had some developer mobility between StarDivision and SAP :/ /me ducks and runs

        • Appropriate tool use (Score:5, Interesting)

          by sjbe ( 173966 ) on Wednesday July 03, 2013 @01:41PM (#44179059)

          That's not the issue. If your spreadsheet is SO large that on a MODERN CPU it's slow ... you're doing it wrong.

          It is a relatively trivial matter to make calculations on a dataset slow regardless of the tool used. I work all the time with datasets and related calculations that would be slow even if you hand-coded them in assembler. The mere fact that something is slow in a spreadsheet has nothing inherently to do with it being worked on in a spreadsheet. Now if the spreadsheet can't handle 65K rows by 65K columns then it shouldn't offer that size table as an option. But most can handle datasets that size and larger without too much trouble. For rapid data modeling and ad-hoc analysis a spreadsheet can be pretty hard to beat.

          When people go wrong using spreadsheets it's usually in one of a few ways. The one I see the most is when they take what should be a prototype analysis and turn it into a production tool. If you need to put a bunch of buttons and other interface tools on a spreadsheet THEN you are doing it wrong. The second is when they try to analyze data involving more than 3 dimensions. While it can be done, it rarely is a good idea. Another I see is when they try to have more than one person working on the spreadsheet. If the dataset is truly huge, or you require multi-user access, or you need to interface with other applications, then by all means use something other than a spreadsheet.

          • Now if the spreadsheet can't handle 65K rows by 65K columns then it shouldn't offer that size table as an option.

            It's not just a matter of the mere number of cells. If the cell is "=A1*B2" and A1 and B2 are natural numbers, it probably doesn't pay off. But what if they are 2048x2048-sized matrices (for those spreadsheet applications that allow for that)? A single naive multiply of two 2048x2048 matrices is roughly 2*2048^3, about 17 billion floating-point operations, which is exactly the territory where a GPU pays off. So the workload can grow either through more active cells, OR through more complicated calculations in the individual cells.

      • by jellomizer ( 103300 ) on Wednesday July 03, 2013 @01:22PM (#44178741)

        Spreadsheets are good for "throwaway applications": you need to do these calculations fast or gather some data, and after a few weeks you don't need it anymore.
        If you are going to be following a process with fairly rigid data sets, you are better off spending the time and money to make a real application with a real database behind it. That way the rigidity works in your favor, preventing incompatibility creep and allowing for future data-gathering abilities.

        Using spreadsheets for your application needs works, but it is very flimsy, and over the long run you will spend a lot more time fixing your mistakes: a bad sort, a mistimed change-and-save, or just one wrong click of the mouse that messed up a lot of data.

        • The (sad?) reality is that a huge percentage of financial modeling by the investment banks is done in Excel. Every trader has their own custom versions. Some flash crash-type events have been traced back to bugs in the spreadsheets. Excel makes it very easy to build a useful financial model very quickly, without a lot of 'programming' - although it is still programming, just don't tell those guys. (It's been pointed out elsewhere here that Lotus Improv and its descendants were/are better, and the desce

      • Sure, it's all easy and fun until something like this happens: http://theconversation.com/the-reinhart-rogoff-error-or-how-not-to-excel-at-economics-13646 [theconversation.com]. (I'm not saying that errors do not happen with databases, but the fact that the logic in your code is written in one bazillion copy-and-pasted formulas makes it very, very easy to screw up something. And it makes it impossible to write proper tests.)
        • That's actually not a problem with the concept of a spreadsheet application, that's a problem with one particular spreadsheet application that has a horrible interface.
      • by orlanz ( 882574 )

        For many of the situations that the parent is talking about, this is not true. Spreadsheets with massive business logic are extremely expensive and very inflexible, more so than DB apps. Just no central group/organization reviews, audits, and tallies these costs like they do for developed applications. Therefore, people assume the spreadsheet is cheaper. Do actual IT audits where these things fall into scope... and you quickly realize just how ridiculously risky the entire deck-of-cards-business is runn

    • by buchner.johannes ( 1139593 ) on Wednesday July 03, 2013 @12:39PM (#44178089) Homepage Journal

      I agree. Also, if you rewrite structured code into a "performance oriented approach", you are doing it wrong.
      Write code so it is easy to understand. Then compilers should understand how to make it fast.
      This can only come from people who think code is for machines. Code is for humans to read and modify.

      • Then compilers should understand how to make it fast.

        Should but often don't.

      • Write code so it is easy to understand. Then compilers should understand how to make it fast.

        Could a compiler have come up with the fast inverse square root [wikipedia.org]?

        I once got a 10% speed increase just by moving a pointer offset increment. The compiler missed that one.
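        For reference, the trick in question is the widely circulated Quake III Arena routine. Reproduced here from the public sources; note that the pointer cast is technically undefined behaviour in modern C++, which is part of why no compiler would ever produce this on its own:

```cpp
#include <cstdint>

// The famous fast inverse square root, as popularized by Quake III Arena.
// Returns an *approximation* of 1/sqrt(x) -- a deliberately "wrong" answer
// that happens to be good enough for normalization/lighting workloads.
float Q_rsqrt(float number) {
    const float threehalfs = 1.5F;
    float x2 = number * 0.5F;
    float y  = number;
    std::int32_t i = *(std::int32_t*)&y;   // reinterpret float bits as int
    i = 0x5f3759df - (i >> 1);             // the magic constant and shift
    y = *(float*)&i;                       // back to float: rough 1/sqrt(x)
    y = y * (threehalfs - (x2 * y * y));   // one Newton-Raphson refinement
    return y;
}
```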

        • No, because then it would have been giving the wrong answer most of the time. The fact that the wrong answer is good enough in certain situations is not something the compiler would be able to determine.
        • Could a compiler have come up with the fast inverse square root?

          The answer is "probably yes". Granted, you'd have to design a compiler of your own - a very specialized one, one that would take numerical approximations and algebraic identities into consideration - but it's definitely not impossible; it's just that probably nobody has felt it necessary to do so. There are also techniques out there for exploratory automatic programming (for example, genetic programming and superoptimization) that might help with the problem.

      • I agree. Also, if you rewrite structured code into a "performance oriented approach", you are doing it wrong. Write code so it is easy to understand. Then compilers should understand how to make it fast.

        I.e., code should be written in high-level descriptive languages, and the compiler should choose the algorithm, so that a tricky-to-understand but much-faster algorithm doesn't show up in the code as written, but shows up in the generated code?

        Not all rewrites-for-performance involve low-level trickiness.

      • by robthebloke ( 1308483 ) on Wednesday July 03, 2013 @01:12PM (#44178601)

        I agree. Also, if you rewrite structured code into a "performance oriented approach", you are doing it wrong.

        Nonsense. One of the joys of C++ is the lack of reflection. This tends to lead apps down the route of wrapping everything into an 'Attribute' class of some description, and wiring those attributes together using a dependency graph. The problem with this (very clean OOP) approach is that it simply doesn't scale. Before too long, this constant plucking of individual data values from your graph ends up becoming a really grim bottleneck. If you then run the code through a profiler, rather than seeing any noticeable spikes, you end up looking at an app that's warm all over. If you're in this situation, no amount of refactoring is going to save the product. Your only option is to restructure the

        The "performance oriented approach" is the only approach you can take these days. Instead of having a fine OOP granularity on all of your data, you batch data into arrays, and then dispatch the computation on the SIMD units of the CPU, or on the GPU.

        Then compilers should understand how to make it fast.

        Uhm, nope. Sure, if you happen to have 4 additions right next to each other, the compiler might replace them with an ADDPS. In the case in point, however, you'll probably expect a generic node to perform the addition on single items in the table. As such, your "addTwoTableElementsTogether" node isn't going to have 4 floating point ops next to each other; it will only have one. Compilers cannot optimise your data structures. If you want to stand a chance of having the compiler do most of the gruntwork for you, you actually have to spend time refactoring your data structures to better align them with the SIMD/AVX data types. Some people call this a "performance oriented approach".
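        A minimal sketch of that layout change, with illustrative names (assuming nothing about Calc's actual classes): batch a column into one contiguous array and the compiler finally has something to vectorise.

```cpp
#include <cstddef>
#include <vector>

// Fine-grained OOP layout: one heap object per cell. Values get plucked
// out one at a time through the dependency graph -- nothing for SIMD here.
struct CellObject { double value; /* flags, listeners, formula, ... */ };

// Batched, "performance oriented" layout: one column = one contiguous
// array of doubles. This loop is trivially auto-vectorizable, and the
// same batch could be handed to an OpenCL kernel unchanged.
void add_columns(const std::vector<double>& a,
                 const std::vector<double>& b,
                 std::vector<double>& out) {
    for (std::size_t i = 0; i < out.size(); ++i)
        out[i] = a[i] + b[i];   // compiler can emit ADDPS/VADDPD here
}
```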

        This can only come from people who think code is for machines. Code is for humans to read and modify.

        Bullshit. This can only come from experienced software developers who understand that the only approach to improving performance of a large-scale app is to restructure the data layout to better align it with modern CPUs. There is *NOTHING* about this approach that makes the code harder to read or follow - that's just your lack of software engineering experience clouding your judgement.

        • I've seen similar issues in higher-level language based projects as well. For example, in a simulation engine, each avatar in the simulation was given its own thread... Now you add a few dozen avatars in a simulation, and more than a few dozen simulations running on a server, and you see the issue. The solution: have the engine run one thread that calls each avatar's "do-work" method in a loop. Minimal code change, huge performance gains for the server as a whole. Why? Because context switching from one
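          A sketch of that fix, with hypothetical types (the engine described above wasn't named): a single engine thread ticks every avatar, so the OS scheduler stops being the bottleneck.

```cpp
#include <vector>

// Hypothetical types for the scenario described above: instead of one
// OS thread per avatar, a single engine thread ticks all of them.
struct Avatar {
    void do_work() { /* advance this avatar's simulation state */ }
};

// One thread, no context switches between avatars, cache-friendly loop.
void tick_all(std::vector<Avatar>& avatars) {
    for (Avatar& a : avatars)
        a.do_work();
}
```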
          • Indeed. In my experience the only language that (historically) used more memory and cycles to get things done than OO was LISP - and I think that was largely corrected in later implementations of LISP. I have a project just now that uses a canned, very OO library to process (oddly enough) Excel spreadsheets. One particular sheet, with about 3000 mostly empty rows of data (no formulas), results in running out of memory after 20 minutes and 8 GB. The cause? I think each cell results in multiple separate

          • by jeremyp ( 130771 )

            It really bugs me when people conflate OO with anti-pattern x. In what world does OO mean the same as "use lots of threads"?

      • I agree. Also, if you rewrite structured code into a "performance oriented approach", you are doing it wrong.
        Write code so it is easy to understand. Then compilers should understand how to make it fast.

        Except code can end up going through so many layers of abstraction, with some of those layers doing things in the most inefficient manner possible because terrible assumptions were made.

        Sometimes, you need to plan for both performance and well structured code -- or you can end up writing garbage which makes h

    • So you haven't heard of Microsoft Excel for supercomputers [cnet.com] then? What about the Excel RPG? [gizmodo.com]
    • If you need it you are doing it wrong

      That's begging the question, sort of. Who said anything about needing it?

      If your spreadsheet needs a GPU to speed up calculations, you are possibly misusing spreadsheets.

      FTFY. If a spreadsheet is capable of doing what someone wants, who are you to say it shouldn't be done that way?

      But if you have a spreadsheet that needs this much CPU time to recompute

      Again with the "need." This isn't being done for the people who need fast spreadsheets. It's being done so all spreadsheets can go faster. Who wouldn't appreciate a spreadsheet recalculating in 0.1s instead of 0.5s?

      you should probably be using a full-fledged database with multiple precomputed indexes.

      Well, now you can draw your arbitrary "this is too slow for spreadsheets" line further away from Calc. That's al

    • by gstoddart ( 321705 ) on Wednesday July 03, 2013 @01:07PM (#44178541) Homepage

      you should probably be using a full-fledged database with multiple precomputed indexes

      Well, you can put together a spreadsheet in a few hours.

      What you're describing is likely months of custom development and design, and a whole new thing to maintain.

      Spreadsheets are popular because they're easily deployed, don't require any extra licensing, and the people who know how to use them can likely do things with them that some of us would be astounded at.

      I know people who use spreadsheets for pretty much everything, because it's available to them readily, and they've been using them for a long time.

      It's all well and good to suggest that they use a full-fledged database -- but in reality, they can probably get something useful in a few days for a fraction of the cost.

      It sounds like in this instance, the code was just horribly inefficient.

      • by tibit ( 1762298 )

        Months to put together a bit of SQL and some front end for it, using, say, oh horror of horrors, Excel? Next you're going to tell me that it takes a man-month to write a hello world.

        • Next you're going to tell me that it takes a man-month to write a hello world.

          Not at all, but I will flat out tell you that I've seen domain specific spreadsheets which have surprisingly little to do with adding numbers, and which if you tried to replace it with a DB application would take you months (or years) to do -- and you'd end up with something you still have to maintain.

          Spreadsheets have the really nice feature of still mostly working when you upgrade the version of the software.

          I'm not saying they'

          • by tibit ( 1762298 )

            I know and it's lamentable. I still don't see what kind of an easy-to-do-in-a-spreadsheet kind of a business "application" would take months or years to do data storage/management in SQL...

            • I still don't see what kind of an easy-to-do-in-a-spreadsheet kind of a business "application" would take months or years to do data storage/management in SQL...

              Who said "easy to do"? I've seen stuff done in spreadsheets which evolved over years, and I know people who do stuff in Excel that leaves my head spinning.

              You'd be amazed at the wacky stuff people can do in Excel. Many of the things I've seen done with it over the years probably evolved over a long time, but the end result is something which isn'

            • I still don't see what kind of an easy-to-do-in-a-spreadsheet kind of a business "application" would take months or years to do data storage/management in SQL...

              The kind a mid-level employee evolves on a daily basis. Duh.

              The task at hand isn't concrete; the task at hand is answering "What if I..."

    • by fermion ( 181285 )
    In the late 80's I developed some relatively complicated models on the first version of Excel. The big concession I had to make was turning autocalc off. The longest process was actually printing the report. And, btw, this included exporting and importing data into a database outside Excel. This was on an original Mac. A few years later I was working with huge telemetry data sets that had to be scrubbed, reorganized, and plotted. I had been on the spreadsheet binge for years, so I started off with so
    • You are correct for business applications. But often what spreadsheets are used for is: "I need this quarterly report figured out for the meeting on Friday, and then I'm going to delete it forever." Going out and building a full-fledged database for that would be stupid. Having a very complicated spreadsheet that solves a problem isn't bad... using that spreadsheet over and over as part of your business process is.

    • by sjbe ( 173966 ) on Wednesday July 03, 2013 @01:24PM (#44178793)

      If your spreadsheet needs a GPU to speed up calculations, you are probably misusing spreadsheets.

      Or it just means that you have some pretty complicated calculations. More computing horsepower never hurts.

      I know most accountants love the spreadsheet, and they make insanely complicated things using spreadsheets, pushing them far beyond what they were designed to do.

      I happen to be an accountant as well as an engineer. What, pray tell, do you think spreadsheets were designed to do? (Hint: it involves rapid data modeling.) They aren't much use if the only problems you solve are toy problems. Plus they require relatively little training to use effectively. Someone can be trained to solve real-world problems MUCH more easily than with most other tools. Most of the problems I'm asked to solve are ad-hoc investigations into specific questions. I shouldn't need a four-year degree in Comp-Sci to accomplish a bit of data modeling.

      But if you have a spreadsheet that needs this much CPU time to recompute, you should probably be using a full-fledged database with multiple precomputed indexes.

      I use some rather complicated spreadsheets. A database would be of no advantage whatsoever for 99.9% of what I use a spreadsheet for. Furthermore a database would be a lot slower to develop, harder to update, and require significant user interface development. If I'm crunching sales data or generating financial projections a spreadsheet is almost always the easiest and most useful tool for the job.

      Databases come into the picture when: A) other applications need to interface with the data, B) the dataset becomes truly enormous, or C) the number of dimensions in the data exceeds 2 to 3. Sometimes I use databases. Most of the time they would be a waste of money, brains and time. Frequently when I actually need a database I'll create a mock up of the tables and calculations on a spreadsheet first which lets me work out the structure much more easily.

      While it is certainly possible to use a spreadsheet inappropriately, a spreadsheet should be able to handle a rather large amount of data and calculations before it chokes.

      • by tibit ( 1762298 )

        You should know basic programming upon exiting high school. Yeah, the sad state of K-12 curricula is something to lament another time.

        So, what tool do you use to diff your spreadsheets? How do you ensure that there isn't a bug in a column of otherwise "identical" formulas? How do you ensure that whatever column you've filled with imported data still has this imported data in it? Where's your log that shows that you haven't unlocked some cells by mistake and messed them up "subtly"?

        Spreadsheets provide a semblance of productivity and an illusion of efficiency.

        • by sjbe ( 173966 )

          You should know basic programming upon exiting high school.

          Programming has been a significant part of my job in years past. You have no idea what my background is.

          So, what tool do you use to diff your spreadsheets? How do you ensure that there isn't a bug in a column of otherwise "identical" formulas?... blah blah

          There are a multitude of ways to error check spreadsheets. RTFM. There also are plenty of tools to replicate formulas (or formatting or numbers). Furthermore you are inappropriately applying data modeling techniques that typically do not apply to what people need out of a spreadsheet. Spreadsheets aren't the right tool for every job but they are a great tool for many.

          Spreadsheets provide a semblance of productivity and an illusion of efficiency.

          Spoken well and truly like someon

    • Comment removed based on user account deletion
      • by gstoddart ( 321705 ) on Wednesday July 03, 2013 @01:59PM (#44179339) Homepage

        You're a developer. Good for you. Good for me too. But our jobs are not to make patronising unrealistic suggestions to smart people who don't have our particular skillset. Our job is to make it easier for other people to do their jobs. Telling them to hire programmers or run off and learn our skills isn't "making it easier".

        This. A thousand times this.

        Somewhere along the way, our industry has developed a collective mentality "we're smarter than you, and we will give you what we want even if we have no idea of what you need".

        Once you get a little further removed and realize that the stuff we're writing/supporting is intended to help the people who do the real, bread and butter parts of the business -- you start to realize if we're an impediment to them, it's worse than if we weren't there at all.

        They're not interested in some smug little bastard looking down his nose at them because they couldn't possibly do what he does. They're interested in getting their stuff done as quickly as possible.

        I can tell you there is nothing more frustrating and counterproductive than some kid straight out of school who thinks the world needs to bow at his feet and stand aside to allow him to tell them how they should do things. Sadly, I've also met developers who have been in the industry a long time who still act like that.

        In many industries, the people who do the real work of the company have highly specialized knowledge, and software is just a tool. And that tool is either helping them get stuff done, or frustrating the hell out of them.

        Acting like we know better than they do (when we in fact know nothing at all about their domain expertise) is at best condescending, and at worst an impediment and a liability.

      • Yes, but we are talking about having to GPU-accelerate a f'ing spreadsheet here. Once a sheet gets to this point you are far and beyond the point where it is reasonable to switch to a more powerful tool. Spreadsheets like this are huge amounts of spaghetti code mixed with badly written VB macros; this is a nightmare for everyone involved. While this does not call for a full-fledged compiled app, usually, switching over to something like MS Access or Libre/OpenOffice Base even if it means a few afternoo

    • Except that most lay people would find pivot tables and similar techniques easier to do on spreadsheets than learn database query languages, or whatever it would be that they need to get the job done.
    • If your spreadsheet needs a GPU to speed up calculations, you are probably misusing spreadsheets. I know most accountants love the spreadsheet, and they make insanely complicated things using spreadsheets, pushing them far beyond what they were designed to do. But if you have a spreadsheet that needs this much CPU time to recompute, you should probably be using a full-fledged database with multiple precomputed indexes.

      The problem is that unlike MS Office, the ODF format does not typically store pre-calculated values of formulas; so when you load a file it has to run all the formulas, etc., to generate what the user wants to see. There's nothing wrong with that; it's just a little slower. An easy optimization would be for ODF to store formula results as part of the cell contents - but that's a change to the ODF standard AND the software that uses it.

      Now, as per auto-calculating the spreadsheet and storing the values - tha

    • But if you have a spreadsheet that needs this much CPU time to recompute, you should probably be using a full-fledged database with multiple precomputed indexes.

      I agree, by that point, you should have definitely moved to MS Access.

  • Clarification (Score:5, Informative)

    by UnknowingFool ( 672806 ) on Wednesday July 03, 2013 @12:35PM (#44177993)
    From the article:

    Calc is based on object oriented design from 20 years ago when developers thought that a cell should be an object and that creates a huge number of problems around doing things efficiently.

    The problem isn't that Calc is object-oriented, but that it was designed so that many things depend on the spreadsheet cell being an object.

    • Re:Clarification (Score:5, Interesting)

      by Trepidity ( 597 ) <delirium-slashdot@@@hackish...org> on Wednesday July 03, 2013 @12:37PM (#44178045)

      Yeah, and it sounds like the GPU angle is really just a hook to get AMD funding. The more important improvements will be refactoring the representation so it doesn't suck in the first place.

    • Re:Clarification (Score:4, Interesting)

      by should_be_linear ( 779431 ) on Wednesday July 03, 2013 @01:17PM (#44178681)
      A cell should be an object even today. Their problem is probably that the Cell object contains something like a string object, so creating 1 million cells needs a million pointers and allocations for a million strings, which is a performance killer. What they need to do is: instead of a string, put an int handle to the string into the cell, and keep all the strings in a single huge allocated blob (like a StringBlobMap object). Moving away from objects to improve performance is rarely a good idea.
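      A minimal sketch of that interning scheme (the StringBlobMap name is the commenter's; this layout is purely illustrative): cells carry a small integer handle, and each distinct string is allocated exactly once in a shared pool.

```cpp
#include <string>
#include <unordered_map>
#include <vector>

// Illustrative string pool: every distinct cell text is stored once,
// and cells refer to it by a cheap integer handle.
class StringPool {
    std::vector<std::string> strings_;
    std::unordered_map<std::string, int> index_;
public:
    int intern(const std::string& s) {
        auto it = index_.find(s);
        if (it != index_.end()) return it->second;   // already pooled
        int handle = static_cast<int>(strings_.size());
        strings_.push_back(s);
        index_.emplace(s, handle);
        return handle;
    }
    const std::string& get(int handle) const { return strings_[handle]; }
};

// A cell now costs one int for its text, not a heap-allocated string.
struct Cell {
    int text = -1;   // handle into the shared pool; -1 means empty
};
```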
  • Refactor? APU? (Score:4, Interesting)

    by JBMcB ( 73720 ) on Wednesday July 03, 2013 @12:42PM (#44178139)

    If the refactor is done properly I don't think the OpenCL acceleration would be necessary. Heck, 1-2-3 running on a 486 was pretty speedy.

    • 486, Bah humbug.
      I ran Lotus Symphony on a 286. It was the job that convinced me to go back to school for a career in IT.
      Also, get off my lawn.
      • by Valdrax ( 32670 )

        A 286? Luxury!
        I used to use AppleWorks on an Apple IIGS running a 65C816 chip, and that's the system that convinced me that programming was a fun hobby.

    • If the refactor is done properly I don't think the OpenCL acceleration would be necessary.

      They are going to need it for the flight simulator function. [eeggs.com]

  • Am I the only one that notices how crazy that sounds?

    • by robmv ( 855035 )

      OpenCL doesn't mean it will need a GPU, but that it can use one if available. OpenCL can use your CPU, and there will be a performance advantage in those cases too; they can use tuned OpenCL libraries instead of rewriting everything inside LibreOffice.
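      A sketch of that fallback in host code (standard OpenCL API calls; the helper function itself is hypothetical): ask for a GPU first, and take the CPU device if none exists; everything downstream is identical either way.

```cpp
#include <CL/cl.h>

// Hypothetical helper: prefer a GPU device, fall back to the CPU.
// The rest of the OpenCL pipeline (context, program, kernels, buffers)
// is the same regardless of which device this returns.
cl_device_id pick_device(cl_platform_id platform) {
    cl_device_id dev;
    if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &dev, nullptr)
            == CL_SUCCESS)
        return dev;   // discrete or integrated GPU (or an APU's GPU half)
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU, 1, &dev, nullptr);
    return dev;       // CPU driver with tuned SSE/AVX code paths
}
```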

    • On the other hand, massively parallel computation is what spreadsheets are, and that is exactly what GPUs are good at.
    • No-one's saying spreadsheets need GPU acceleration. But why shouldn't the GPU be taken advantage of?

    • Am I the only one that notices how crazy that sounds?

      Why should it sound crazy? If you've got some parallel computations to make you'd be a fool not to use the GPU. There are many problems [wikipedia.org] that could take advantage of the extra computing horsepower that are perfectly appropriate to do on a spreadsheet.

      • by 0123456 ( 636235 )

        Why should it sound crazy? If you've got some parallel computations to make you'd be a fool not to use the GPU.

        1. For simple tasks, parallel computing often ends up slower because the time taken to transfer data between processors is more than the time taken to do the calculations.
        2. If you need teraflops of performance to process your spreadsheet, you're probably like my friend who used to write novels in Excel.

  • by jabberw0k ( 62554 ) on Wednesday July 03, 2013 @01:00PM (#44178437) Homepage Journal

    Spreadsheets lead the inexperienced down the garden path of "Oh, this looks easy..."

    At some point you think, Oh, let me just sort this column. And you fail to realize some formula on sheet 27 presumes a linkage between column C on sheet 5 and column F on sheet 13. So now your entire model is garbage.

    In all these decades, hasn't anyone resuscitated Javelin [wikipedia.org] with its time-oriented models, where what looked like a spreadsheet was just a view of the underlying model? "Javelin understands the arrow of time" -- 1985 slogan

  • Why oh why can't the bleeptards at LibreOffice recognize that proper document editing is done in a "Galley View", which MsoftWord refers to as "Draft" (previously "Normal") view? Displaying page boundaries, headers & footers, etc. is of exactly zero benefit while one is composing the text of the document.
    Personally, I'd like not to see text formatting either (bold, font size, etc.) but I can live with that. At least until I find a company that supports LaTeX, anyway. For that matter, why couldn't LibreOffice (and Microsoft too) have a twin-pane editor like TexMaker? Do your typing in one pane and observe the fully rendered page in the other as desired?

    • Why oh why can't the bleeptards at LibreOffice recognize that proper document editing is done in a "Galley View", which MsoftWord refers to as "Draft" (previously "Normal") view? Displaying page boundaries, headers & footers, etc. is of exactly zero benefit while one is composing the text of the document. Personally, I'd like not to see text formatting either (bold, font size, etc.) but I can live with that. At least until I find a company that supports LaTeX, anyway. For that matter, why couldn't LibreOffice (and Microsoft too) have a twin-pane editor like TexMaker? Do your typing in one pane and observe the fully rendered page in the other as desired?

      grrrrrrumble

      C'est a little off topic, but I so very much agree. Top reason I can't cut the M$Word cord for Writer. Please, LibreOffice people, please?

      • This is why I loved WordPerfect 5.1 so much. Because of the simple text based interface, you didn't spend so much time worrying about how your document looked, and just spent time typing up the actual document. All the features were available from the keyboard which meant that it was faster to do any kind of formatting that you needed to do because you never had to move your hands away from the keyboard.
      • My version of LibreOffice has this (4.0.3.3).

        Menu --> View --> Print Layout toggles the behaviour.

    • by Bert64 ( 520050 )

      LibreOffice refers to this as "web layout", and it's right there in the View menu.

      As for why it doesn't work like LaTeX, I guess that's because it's aiming at a totally different market... Most people simply don't understand the idea of formatting being separated from content; they just want to lay the page out as they see it - as if they were doing it by hand. Also, modern word processors have moved more towards traditional DTP applications, where there is a focus on layout rather than on typesetting a large b

  • It's times like this when I wish I actually had a need for such a thing. If LibreOffice ever allowed me to create prettier graphs like Word does, I'd consider moving on over. As much as Microsoft is hated on around here, Office is pretty damned polished (that isn't to say there are no problems... there are still many that drive me bonkers, but they are software features, not performance and the like).
