3D Raytracing Chip Shown at CeBIT

An anonymous reader submits "As noted at heise.de, Saarland University is showing a prototype of a 3D raytracing card at CeBIT 2005. The FPGA is clocked at 90 MHz and is 3-5 times faster at raytracing than a Pentium 4 CPU with 30 times the clock speed. Besides game engines using raytracing, there was a scene of a Boeing with 350 million polygons rendered in real time."
  • Hardware encoding (Score:5, Interesting)

    by BWJones ( 18351 ) * on Monday March 14, 2005 @11:12PM (#11940452) Homepage Journal
    The FPGA is clocked at 90 MHz and is 3-5 times faster at raytracing than a Pentium 4 CPU with 30 times the clock speed.

    I am really not surprised at the performance, as almost any time you build an algorithm into hardware, it is significantly faster. For instance, I used to have a Radius 4-DSP Photoshop accelerator card in my old 68030-based Mac IIci, bought in 1990, that would run Photoshop filters significantly faster than even my much later PowerPC-based PowerMac 8500, purchased in 1996 with faster hard drives and more memory.

    The same sorts of benefits can be seen in the vector math optimizations built into the G4 and G5 chips with AltiVec [wikipedia.org].

    So, the question is: Can these guys get ATI or nVidia to buy their chip?

    • Where is Moore's law when you need it!
    • by ghereheade ( 681897 ) on Monday March 14, 2005 @11:21PM (#11940499)
      As an FPGA, no. NVIDIA et al. would find it too expensive. But if they can roll their Verilog/VHDL code into an ASIC without too much trouble, it might be cost-effective enough to catch the gamer and CAD markets. And as a side benefit, an ASIC should potentially run even faster, giving even more amazing performance. In fact, without knowing what these guys are up to, I'd bet that is in their business plan.
    • Re:Hardware encoding (Score:5, Informative)

      by foobsr ( 693224 ) * on Monday March 14, 2005 @11:22PM (#11940506) Homepage Journal
      So, the question is: Can these guys get ATI or nVidia to buy their chip?

      They are trying (surprise, surprise).

      From their site (already melted (yes, yes, mirrordot)): We are very much interested in evaluating new ways for computer games and therefore like to cooperate with the gaming industry. Thus if you are in such a position, please send us an email!

      CC.
    • Re:Hardware encoding (Score:4, Informative)

      by TLLOTS ( 827806 ) on Monday March 14, 2005 @11:28PM (#11940534)
      It's doubtful that people from ATi or nVidia would have anything more than a casual interest in such a chip. More than anything, with the way GPUs have been heading, this device would be more of a step backwards than forwards. GPUs these days are far more flexible than they used to be, and there's every indication that trend will only continue, allowing developers to do what they want with the hardware, rather than being told what they can do with it and having no real choice either way.

      So to sum up, don't expect to see heavily specialised GPUs for raytracing hitting the market, at least not for the mainstream buyer. It's more likely that we'll see GPUs become more generalised to the point where raytracing can be implemented in software. Will they be as fast as a purpose-built chip like this? No, more than likely they won't. Will developers be able to do a whole lot more with them? Most definitely, and though that will come at a significant performance penalty for the moment, I think it's the right trade-off to make, as we should see far more creative uses of hardware put into practice, such as the work being done already to use GPUs for something other than graphics processing.
      • We already have a generalized chip in a computer; it's called a CPU.
      • Re:Hardware encoding (Score:5, Interesting)

        by mrgreen4242 ( 759594 ) on Monday March 14, 2005 @11:47PM (#11940630)
        I think there may be a market here. Say, for example, that the next generation of the Unreal or Doom engine is designed around something like this. The SOFTWARE vendor could potentially, assuming they could get the cost down far enough, offer some sort of PCI or, even better, USB2/FireWire hardware accelerator bundled WITH the game.

        Think of it like this... Unreal 4 or whatever the next next-gen title will be decides to partner up with these guys. They develop an engine that runs at 60 fps with amazing graphics, etc. You can buy the USB3 or FW1600 or whatever add-on is needed for the game for, say, $50, or a bundle for $75 that has the add-on and the game. The development cycle would be much easier as there is only one type of hardware to worry about, and the consumer would win as they could get the new hotness without having to drop $300 on a new video card.

        It could also serve as an amazingly effective copy protection scheme. Can't very well play the game without the required accelerator.

        Seems possible to me.
        • I remember the days of hardware keys that would plug into the computer in order to use certain software (though those days were virtually over when I got my first PC).
          They were never very popular then. I can't imagine them becoming so now.
          • Re:Hardware encoding (Score:3, Interesting)

            by Afrosheen ( 42464 )
            Those days are over for the average computer user at home, but it's very much alive for corporate users. A lot of CAD and design software requires keys on the workstation AND the server.
          • Re:Hardware encoding (Score:3, Interesting)

            by RichardX ( 457979 )
            The thing with dongles is they're expensive, they're a pain in the arse for users, and they still don't stop a determined cracker.

            At the end of the day, there comes a point where the software checks its key/dongle/word from the manual/price of fishcakes in Japan and asks itself "So, am I allowed to run?" and all you need to do is ensure the answer to that is always "Yes!"
      • by fireboy1919 ( 257783 ) <(rustyp) (at) (freeshell.org)> on Monday March 14, 2005 @11:58PM (#11940683) Homepage Journal
        You seem to be describing this as if it's some kind of custom hardware with many limitations.

        This could not be further from the truth. FPGAs are more flexible than any of their counterparts. FPGA stands for "field programmable gate array"; they are basically a matrix of memory elements (at the very least latches) connected to logic blocks that are configured to act as particular gates via a ROM or something similar.

        It's kind of like a chip emulator written in hardware. You may be wondering why we don't use these all the time. First, they're a lot more expensive, bigger, and more power-hungry than their single-purpose cousins. Second (as if that weren't enough), they're usually 2-5 times slower than the same logic on a custom chip.

        So the big question is why should we use them? What improvements can they give that normal chips can't?

        The big gain is when you want to optimize the hardware for a specific application and still be able to change it. These were used in high-end digital video cards to handle whatever kind of signal is actually output by whatever kind of camera you've got (I can only assume this is still the case, but I stopped keeping track around 2000).

        I don't know if the people who wrote this thing take advantage of this idea within their design, but it's a possibility.
        • Hmm, I assumed that they had used an FPGA because it was only their prototype board.

          From what I've learned in one of my digital system design courses in college, ASICs have an upfront production cost of $1,000,000 to $2,000,000 and a small unit price, while FPGAs have a much lower fixed cost (~$10/unit).

          So, I imagine they will manufacture ASIC chips when they get a big sponsor, unless they are using the FPGA's dynamic abilities...

        • Another reason that people don't use FPGAs that much in consumer applications is the security of the IP on the FPGA. They are loaded at power-on from a PROM chip, and it is a somewhat trivial task to read the entire contents of the FPGA at power-on. This would be a nightmare for companies like Nvidia and ATI, who value their custom hardware.

          Fortunately, there are some [actel.com] companies [latticesemi.com] that are incorporating flash memory onto their FPGAs instead of using the standard [xilinx.com] SRAM. The problem is that flash-based FPGA
    • Re:Hardware encoding (Score:5, Interesting)

      by moosesocks ( 264553 ) on Monday March 14, 2005 @11:47PM (#11940627) Homepage
      Unlikely. The current generation of 3D cards are all polygon-pushers. Direct3D/OpenGL are all about polygons. Virtually all raytracing is done by the CPU.

      Of course, raytracing produces beautiful results compared to the other methods of 3D graphics, but it is MUCH more expensive in terms of CPU cycles on today's CPUs and non-existent on graphics chips -- the first graphics chips were polygon-based because drawing polygons is indeed easier than raytracing, even with specialized hardware. Of course, specialized hardware definitely helps polygons as well: my 300 MHz/whatever TNT2 can render a scene as fast as the fastest Pentiums today can using software rendering.

      All of the big renderfarms rely exclusively on the CPU to do their animations. This could change all that. I for one look forward to seeing the potential this has.
      • by Rothron the Wise ( 171030 ) on Tuesday March 15, 2005 @05:18AM (#11941755)
        Of course, raytracing produces beautiful results compared to the other methods of 3d graphics, but it is MUCH more expensive in terms of CPU cycles on today's CPUs

        This may not be true for much longer. The complexity of a scene in a traditional polygon renderer like nvidia's chips scales fairly linearly with the number of polygons in view. Not so with raytracers. They use hierarchical structures to test which group of triangles a ray may intersect, so they scale more like O(n log n). They also render _only_ visible pixels, while overdraw is a major hurdle for traditional 3D cards.

        What this means is that as scenes get increasingly complex, there is a crossover point where raytracing will overtake traditional rendering, and dedicated raytracing hardware ensures that this happens sooner. If you add this to the fact that raytracing lets you have perfectly smooth non-polygonized objects, perfect reflections and other features not easily replicated by traditional rendering, you'll see the incentive.
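
        To make the scaling argument concrete, here is a minimal C++ sketch of the acceleration-structure idea (illustrative only, not the Saarland design; the triangle test is left as a stub). A bounding-volume hierarchy lets each ray reject whole groups of triangles at once, so per-ray cost grows roughly with tree depth rather than with triangle count.

        #include <algorithm>
        #include <cmath>
        #include <utility>
        #include <vector>

        struct Vec3 { float x, y, z; };
        struct Ray  { Vec3 o, d; };                  // origin, direction
        struct AABB { Vec3 lo, hi; };

        struct BVHNode {
            AABB box;
            int left = -1, right = -1;               // child indices; -1 means leaf
            std::vector<int> tris;                   // triangle indices (leaves only)
        };

        // Slab test: does the ray hit the box anywhere in [0, tmax]?
        bool hitBox(const Ray& r, const AABB& b, float tmax) {
            float t0 = 0.0f, t1 = tmax;
            const float ro[3] = { r.o.x, r.o.y, r.o.z };
            const float rd[3] = { r.d.x, r.d.y, r.d.z };
            const float lo[3] = { b.lo.x, b.lo.y, b.lo.z };
            const float hi[3] = { b.hi.x, b.hi.y, b.hi.z };
            for (int a = 0; a < 3; ++a) {
                float inv = 1.0f / rd[a];
                float tn = (lo[a] - ro[a]) * inv, tf = (hi[a] - ro[a]) * inv;
                if (tn > tf) std::swap(tn, tf);
                t0 = std::max(t0, tn);
                t1 = std::min(t1, tf);
                if (t0 > t1) return false;
            }
            return true;
        }

        // Stub: a real tracer would do a ray/triangle intersection here.
        float intersectTriangle(const Ray&, int /*tri*/) { return INFINITY; }

        // Only leaves whose boxes the ray actually enters pay for per-triangle tests.
        // Call as traverse(nodes, 0, ray, INFINITY) starting from the root.
        float traverse(const std::vector<BVHNode>& nodes, int idx, const Ray& r, float closest) {
            const BVHNode& n = nodes[idx];
            if (!hitBox(r, n.box, closest)) return closest;   // prune the whole subtree
            if (n.left < 0) {                                 // leaf: test its few triangles
                for (int t : n.tris)
                    closest = std::min(closest, intersectTriangle(r, t));
                return closest;
            }
            closest = traverse(nodes, n.left,  r, closest);
            closest = traverse(nodes, n.right, r, closest);
            return closest;
        }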

        A case in point: PRMan, the renderer used by Pixar, is a traditional polygon rasterizer, but Pixar has on occasion used BMRT (a RenderMan-compliant ray tracer) for scenes that require ray tracing for realism.

        Specifically, a scene in A Bug's Life depicting a glass bottle filled with nuts was rendered using BMRT. Flexible and robust realistic reflection and refraction is solely in the domain of ray tracing. What you saw in Half-Life was only cheap tricks which would fail miserably in less constrained scenes.
        • Re:Hardware encoding (Score:5, Informative)

          by Hortensia Patel ( 101296 ) on Tuesday March 15, 2005 @08:57AM (#11942655)
          They use hierarchical structures to test which group of triangles a ray may intersect, so they scale more like O(n log n).

          I assume you're talking about kd-trees... these do indeed offer very nice performance characteristics, but they're designed for static geometry. Efficient raytracing for dynamic geometry (moving or deforming objects) is AFAIK still far from "solved".

          If you add this to the fact that raytracing lets you have perfectly smooth non-polygonized objects

          and take away the fact that they don't particularly like the arbitrary triangle meshes that make up the vast majority of real datasets...

          Flexible and robust realistic reflection and refraction

          Yes, "Flexible and robust" is the killer. And not just for refraction/reflection; there's still no fully-general, clean, robust method of shadowing for rasterizers, and it's not for want of trying. Radiosity is a joke. Attempts to get realism out of current rasterization approaches are bodges piled on kludges piled on hacks. It became clear some time ago that the technology was heading up a dead end. Of course, so much has been invested in making that dead-end fast that it's going to be hard to take the performance hit of moving to a better but less optimized approach.

          I suspect we'll eventually end up with a hybrid, rather like current deferred-shading techniques. It'll be interesting to watch it all pan out.
        • There are a few pieces of information here that you're missing; I figured I'd drop in my $0.02.

          while overdraw is a major hurdle for traditional 3d cards.
          Not lately (as of the GeForce 5K and later). There are two very tried-and-true methods of getting around this. One is to lay down a depth-only pass first, which allows the hardware to render at ~2x the normal polygon throughput (which is pretty much infinite at this point anyway), and then reject the hidden color pixels *before shading ever occurs*. Also, there is *a s
      • Re:Hardware encoding (Score:3, Interesting)

        by orasio ( 188021 )
        Unlikely. The current generation of 3D cards are all polygon-pushers. Direct3D/OpenGL are all about polygons. Virtually all raytracing is done by the CPU.

        They haven't been just polygon pushers for a long time now. You should watch Doom 3, maybe.

        For example, with nvidia's 2.0 shaders on a 5900, some people have built a kind-of-realtime photon mapping renderer. I'm sure that with the new cards it could be actually realtime.

        For your information, ray tracing is far, far from providing the best results, part
    • by Anonymous Coward
      Ray tracing and the current OpenGL/Direct3D way of rendering images are completely different. With OpenGL the rendering system renders a polygon to the frame/z-buffer and can pretty much forget about its existence from that point on, meaning reduced memory overheads. Raytracing, on the other hand, requires every object in the scene to be accessible to render any pixel; each pixel "ray" needs to be tested against all objects in the scene to see which one it hits. This also causes problems moving from the cu
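
      A minimal C++ sketch of that access pattern (purely illustrative; the scene is just a list of spheres and the camera setup is made up). Without an acceleration structure, every pixel's ray walks the entire scene list, which is why the whole scene has to stay resident and randomly accessible.

      #include <algorithm>
      #include <cmath>
      #include <vector>

      struct Vec3   { float x, y, z; };
      struct Sphere { Vec3 c; float r; };
      struct Ray    { Vec3 o, d; };          // d is assumed normalized

      // Standard quadratic ray/sphere test; returns hit distance or INFINITY on a miss.
      float hit(const Ray& ray, const Sphere& s) {
          Vec3 oc = { ray.o.x - s.c.x, ray.o.y - s.c.y, ray.o.z - s.c.z };
          float b = oc.x * ray.d.x + oc.y * ray.d.y + oc.z * ray.d.z;
          float c = oc.x * oc.x + oc.y * oc.y + oc.z * oc.z - s.r * s.r;
          float disc = b * b - c;
          if (disc < 0.0f) return INFINITY;
          float t = -b - std::sqrt(disc);
          return t > 0.0f ? t : INFINITY;
      }

      // O(pixels * objects): each pixel tests every object; nothing can be "forgotten".
      void render(const std::vector<Sphere>& scene, int w, int h, std::vector<float>& depth) {
          depth.assign(w * h, INFINITY);
          for (int y = 0; y < h; ++y)
              for (int x = 0; x < w; ++x) {
                  float dx = (x - w * 0.5f) / w, dy = (y - h * 0.5f) / h;
                  float len = std::sqrt(dx * dx + dy * dy + 1.0f);
                  Ray r = { {0, 0, 0}, { dx / len, dy / len, 1.0f / len } };
                  for (const Sphere& s : scene)                       // touches all geometry
                      depth[y * w + x] = std::min(depth[y * w + x], hit(r, s));
              }
      }
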
  • by CypherXero ( 798440 ) on Monday March 14, 2005 @11:13PM (#11940460) Homepage
    For a second there, I thought I hit my head, and I had gone back to the early 90's!
    • But depending on the actual graphics output, not too bad for a rendering device. You're better off comparing it to your video card than to your desktop CPU. Even on my 3 GHz machine, though, raytracing can be dog slow and certainly nowhere near realtime.

      I can't comment too much since we're suffering from the Slashdot effect (I can RTFA but the pics aren't loading), but depending on chip size one could imagine this being very useful on a handheld device or something similar. Perhaps a small integrated graphics ta
  • meh (Score:4, Funny)

    by Anonymous Coward on Monday March 14, 2005 @11:14PM (#11940466)
    In other news, Miskatonic University is showing a prototype of a 3D Raytracing Card at CeBIT 2005.

    The FPGA is clocked at 666 MHz and is 3.5 billion times faster in raytracing than an 80486 CPU.

    Besides game engines using raytracing there was a scene of Cthulhu awakening in 350 trillion polygons rendered in realtime.

    i.e. Vaporware
  • by Prophetic_Truth ( 822032 ) on Monday March 14, 2005 @11:17PM (#11940479)
    3D Realms, upon learning of this new technology, has decided to push back "Duke Nukem Forever" until the engine rewrite is completed.
  • by narcolepticjim ( 310789 ) on Monday March 14, 2005 @11:18PM (#11940487)
    Is the Beowulf movie rendered with a cluster of these?
  • oh yeah (Score:3, Interesting)

    by lycium ( 802086 ) <thomas.ludwig@gm ... m minus language> on Monday March 14, 2005 @11:19PM (#11940489) Homepage
    Ray tracing will *so* usher in a new era of realtime graphics when we can do something like 10-50M intersections per second.

    It's amazing to me that nvidia have ignored this up until now; their existing SIMD architecture and memory subsystems could easily be adapted...

    All we need now is a consumer push!
    • How, Exactly? (Score:2, Interesting)

      by Musc ( 10581 )
      Would you care to enlighten me as to what exactly ray tracing brings to the table, above and beyond what we already get from a state of the art GPU?

      The only thing I can think of is that ray tracing would allow us to replace complicated hacks for shadows and reflections with a more natural implementation, but I can't imagine how this will usher in a new era of gaming.
      • Re:How, Exactly? (Score:3, Interesting)

        by The boojum ( 70419 )
        How about non-tessellated geometry? You can have high-detail curved surfaces without turning everything into a dense polygon mesh. That in turn lowers the memory and rendering requirements, so you can apply those resources to proper detail instead.

        Or how about global illumination lighting effects? Truly emissive surfaces and area lights? As a hobbyist map maker, I would kill to have an engine that supported these; imagine being able to just tag the sky as an emissive surface and have the entire level li
      • Re:How, Exactly? (Score:3, Interesting)

        by Sycraft-fu ( 314770 )
        Actually, raytraced shadows look like crap, at least in the classical sense. Classical raytracing is done by tracing rays back from the pixels to the light sources. It works fairly well, but you find the images don't look right. There are a number of things that are generally problematic, and the shadow problem is that the shadows come out perfectly hard-edged.

        You have to use a different kind of rendering to get soft shadows and proper reflection. Radiosity is a popular method. Basically you treat all objects as light sources, after a fashion. L
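
        For reference, the classical hard-shadow test being described is tiny; here is an illustrative C++ sketch (nearestHit is a stub standing in for whatever scene-intersection routine the tracer uses). One occlusion ray per light gives a binary lit/shadowed answer, which is exactly why the shadow edges come out razor sharp.

        #include <cmath>

        struct Vec3 { float x, y, z; };
        struct Ray  { Vec3 o, d; };

        // Stub: a real tracer would return the nearest hit distance along the ray.
        float nearestHit(const Ray& /*shadowRay*/, float /*maxDist*/) { return INFINITY; }

        // Returns 1.0f if the light reaches the point, 0.0f if any geometry blocks it.
        float hardShadow(const Vec3& point, const Vec3& light) {
            Vec3 toLight = { light.x - point.x, light.y - point.y, light.z - point.z };
            float dist = std::sqrt(toLight.x * toLight.x + toLight.y * toLight.y +
                                   toLight.z * toLight.z);
            Ray shadowRay = { point, { toLight.x / dist, toLight.y / dist, toLight.z / dist } };
            // Real code offsets the origin slightly to avoid self-shadowing ("shadow acne").
            return nearestHit(shadowRay, dist) < dist ? 0.0f : 1.0f;
        }
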
    • Re:oh yeah (Score:5, Interesting)

      by forkazoo ( 138186 ) <wrosecrans AT gmail DOT com> on Monday March 14, 2005 @11:51PM (#11940655) Homepage
      It's not amazing at all. When nVidia started making 3D accelerators, OpenGL was a mature, common API. DirectX was gaining traction. DCC and game programmers were familiar with the immediate-mode APIs, and were making programs that used them.

      By making a card that rendered in immediate mode, nVidia had, ya know, a market. If they had created a raytracing card, they would have needed to invent a new API to run it. They would have been the only ones with a card that used the API. Because they would have had a very small installed base, nobody would have written programs to take advantage of the API. Other companies have made raytracing accelerators. This isn't new. Most of them have not done incredibly well because there is so little actual use for the product.

      Think of it this way... How many programs have you seen written for the 3dfx Glide API? So, if you are one of the people who still has a Glide card, but it was designed so that it couldn't do OpenGL because it used completely different technology, how useful would it be to you?

      Personally, I'd love a card like that, if it was well supported by Lightwave, and had a vibrant developer community, and multiple vendors making cards for the raytracing API, and I was sure it wouldn't disappear soon.
      • So, if you are one of the people who still has a Glide card, but it was designed so that it couldn't do OpenGL because it used completely different technology, how useful would it be to you?

        Normally I avoid Grammar Nazi posts on slash, but this is completely indecipherable. If you're setting up a hypothetical, use the subjunctive. If you're asserting that Glide-oriented cards don't support OpenGL, you're flat-out wrong; they have full OpenGL support. 3dfx also provided a stripped-down, "Quake only" driver

    • Re:oh yeah (Score:3, Interesting)

      by Spy Hunter ( 317220 )
      Bah. Raytracing is not required for good graphics. Pixar's Photorealistic RenderMan didn't even have raytracing until version 11, which came out *after* Monsters, Inc.

      Raytracers can easily do hard shadows, reflection, refraction, and order-independent transparency. Today's rasterizers can do almost all that too: hard shadows (stencil shadows), and "good enough" reflections and refractions (using environment maps and shaders). Order-independent transparency is a tough one; it can be kludged using shade

  • Impossible! (Score:5, Funny)

    by menace3society ( 768451 ) on Monday March 14, 2005 @11:19PM (#11940491)
    The FPGA is clocked at 90 MHz and is 3-5 times faster at raytracing than a Pentium 4 CPU with 30 times the clock speed.

    That's ridiculous. Everyone on Slashdot surely knows by now that the only reliable way to compare processor speeds across architectures is to compare clock speed!

  • by Anonymous Coward
    of the avi clips!!!
  • Open FPGA? (Score:3, Insightful)

    by Doc Ruby ( 173196 ) on Monday March 14, 2005 @11:21PM (#11940498) Homepage Journal
    Is the SU driver open source? Because it would be fun to see people hacking it to send general purpose netlists to the FPGA, and harnessing it for other HW-accelerated tasks. Maybe loading up all the PCI slots with those boards, pooling the GFLOPS, and tapping them for graphics when needed - leaving the rest for computation.
  • Sweet deal! (Score:2, Informative)

    by dauthur ( 828910 )
    350M polygons is goddamned amazing. I'm sure the kids at Havok will find good ways to implement this, and I'm sure Abit, Biostar, etc. will too. I wonder how long it will take for them to make a PCI card you can put in as a graphics-card booster... or maybe even USB? This technology is extremely exciting. It would definitely take some load off graphics cards and drastically improve the framerate.
    • Would the number of lights be a limiting factor, though? You'd have to account for each light to get the 'real' shadows, right?

      Should run Doom3 pretty well then I suppose...
      • Re:Sweet deal! (Score:5, Interesting)

        by forkazoo ( 138186 ) <wrosecrans AT gmail DOT com> on Tuesday March 15, 2005 @12:28AM (#11940819) Homepage
        In general, yes, lights will be one limiting factor. I'm going to blabber a bit about how complexity grows in raytracing when you move past very simple scenes... Then get to your comment about Doom3.

        In the simplest algorithm, assume only point lights, no spots or area lights. Basically, when you are shading a point so you can draw it on screen, you trace a ray from that point to each light. (You may skip lights beyond some distance cutoff; it doesn't matter.) If the ray hits some geometry on the way to the light, the point is in shadow for that light.

        So, without reflections or anything cool, just point lights and shadows, you will trace
        S + L*S rays

        where S is the number of scene samples (pixels) that you are shading, and L is the number of lights. The lone S comes from all the rays you trace from the eye point out into the scene in order to figure out which point is visible at which pixel.

        If you have lots of reflections and refractions, that's what can really start to slow things down. At the point being shaded, you have to trace one ray each for the reflection and the refraction. If the reflection ray then hits another surface which is reflective, you trace another ray to get the reflected reflection; same with refraction. So, in theory, each sample point can spawn two new rays in addition to the rays for shadow tests, and each of those two new rays can result in two more new rays, etc. You basically have to set some limit on how many times you let it recurse, because two parallel mirror planes would take forever to render accurately.

        But wait, there's more! (It slices, it dices!) Everything really starts to explode when you move from hard shadows and hard reflections to soft ones. If you want everything to be nice and soft and smooth, you basically have to trace lots of rays and average the results. So, instead of each recursion in a shiny refractive scene spawning two more rays, it may need to spawn 20 or 200. Assume a maximum recursion depth of 5 and 20 rays being generated by each shading point:
        The first point traces twenty rays.
        Each of those 20 traces 20 more, for 400.
        Each of those 400 traces 20 more, for 8,000.
        Each of those 8,000 traces 20 more, for 160,000.
        That leaves 3,200,000 shading sample points at the fifth level of recursion, each of which will need to trace rays to each light to check whether it is in shadow, and possibly many more rays for soft shadows.

        So: 3.2 million, times lights, times soft_shadow_samples, times pixels, times samples_per_pixel (and believe me, 10 samples each for the reflection and refraction is not very smooth, in my experience!).

        A veritable explosion of rays, as I am sure you see. I won't even begin to discuss radiosity, because that's actually slow, and computationally intensive. :)
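
        A tiny C++ back-of-the-envelope check of those numbers (illustrative only; the 20-way branching, depth of 5, light count and shadow-sample count are just example values):

        #include <cstdint>
        #include <cstdio>

        int main() {
            const std::uint64_t branching     = 20;  // reflection/refraction samples per hit
            const int           maxDepth      = 5;
            const std::uint64_t lights        = 2;
            const std::uint64_t shadowSamples = 10;  // >1 once you want soft shadows

            std::uint64_t frontier = 1;              // rays alive at the current level,
            std::uint64_t total    = 0;              // starting from one primary sample
            for (int depth = 0; depth < maxDepth; ++depth) {
                total += frontier;                           // the rays themselves
                total += frontier * lights * shadowSamples;  // shadow rays at each hit point
                frontier *= branching;                       // children spawned for the next level
                std::printf("level %d: %llu rays queued\n",
                            depth + 1, (unsigned long long)frontier);
            }
            std::printf("total rays for one primary sample: %llu\n",
                        (unsigned long long)total);
            return 0;
        }

        It prints 20, 400, 8,000, 160,000 and 3,200,000 for the five levels; multiply the total by pixels and samples per pixel and the explosion described above follows directly.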

        Now, we get to the subject of Doom3... I'm not sure this hardware would actually be that well suited to Doom3. You know all the lovely shading effects, with detailed highlights and bump mapping? They pretty much define the Doom3 Experience. That all comes from a technology called programmable shading. Basically, while your GPU is rendering the polygons in the game, it runs a tiny little program that determines the precise shade of black for each and every pixel.

        A raytracing accelerator takes advantage of the fact that ray hit-testing is a very repetitive chore which can be done in hardware very efficiently.

        But, as you can see, most of the really interesting rays in a scene are the "secondary rays" - the rays for reflection and refraction and lighting and such. So, suppose this card calculates a ray and figures out the point that needs to be shaded. Because the accelerator is all in hardware, for programmable shading like Doom3's it would need to hand off back to the host processor, which would run the shader code, which would ask for 20 more rays, etc. So, with a fully fixed-function raytracer, there would either be annoying limits on what the scene could look like, or you would constantly be going back and forth be
        • by bani ( 467531 ) on Tuesday March 15, 2005 @01:03AM (#11940943)
          AFAIK the advantage of raytracing is that you are no longer bound by polys. You can easily have unbelievably complex scenes with little performance impact vs. simple scenes; your bottleneck is no longer polys/sec but rays/sec.

          And IIRC raytracing is a very simple thing to parallelize. Given the performance they are getting out of their FPGA prototype, I expect this will scale nicely.

          IMO raytracing is the obvious future of graphics cards.

          As an aside, a lot of game mechanics depend on ray traces for detecting collisions. Now if you could use a raytracing GPU to handle that as well, you've offloaded yet more work from the CPU... :-)
  • Performance (Score:5, Insightful)

    by MatthewNewberg ( 519685 ) on Monday March 14, 2005 @11:29PM (#11940540) Homepage

    On the ray-traced Quake 3 website it says that it runs faster with more computers (about 20 fps @ 36 GHz at 512x512 with 4xFSAA).

    Assuming that is correct, a normal chip can render ray-traced Quake 3-like graphics at 2 to 3 fps on a 4 GHz machine, which means the raytracing chip could do it at 6 to 9 fps. This might be real-time for a lot of research, but when it comes to games anything less than 15 fps is a joke. I'll be interested when they can hit 30 fps with more graphics complexity than Quake 3.

    • Re:Performance (Score:2, Informative)

      by Anonymous Coward
      Dude, way to miss the point entirely. First of all, this is a prototype, not a production model. Second, according to TFA [uni-sb.de], "Nvidia's GeForce 5900FX has 50-times more floating point power and on average more than 100-times more memory bandwidth than required by the prototype." In other words, imagine what this baby can do when backed up by today's graphics technology.
    • Re:Performance (Score:3, Insightful)

      by timeOday ( 582209 )
      It seems like raytracing would be ridiculously easy to run in parallel. If they're only 90 MHz chips, they probably don't take much power anyway. They should slap 8 of them on a board.
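
      That intuition is easy to illustrate with a minimal C++ sketch (the traceScanline stub is hypothetical, and std::thread workers stand in for the eight chips). Rows of pixels don't depend on each other, so they can simply be dealt out:

      #include <thread>
      #include <vector>

      // Stub: a real renderer would trace every pixel in row y against the scene.
      void traceScanline(int /*y*/) {}

      void renderParallel(int height, int workers) {
          std::vector<std::thread> pool;
          for (int w = 0; w < workers; ++w)
              pool.emplace_back([=] {
                  for (int y = w; y < height; y += workers)  // interleaved rows, no shared state
                      traceScanline(y);
              });
          for (auto& t : pool) t.join();                     // e.g. renderParallel(1080, 8)
      }
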
      • Well... it's not quite that easy. It's one thing to make one 90 MHz chip. It's a different story to create a system around 8 of them (with hardware and software) to run in parallel. Maybe there is a reason Intel never made an 8x90 MHz setup?
    • Look, if it were production-ready, it wouldn't be research.

      While it's certainly not enough to start playing games, it's a heck of a lot closer than I thought was possible. And there's a lot of tweaking that could be done to speed the process up with present technology. An FPGA [wikipedia.org] is the integrated-circuit equivalent of a stack of Lego. It makes it possible to build custom hardware without forking out for a custom chip; they are, however, much slower than such a custom build. I expect if Nvidia decided to mak

    • This might be real-time for a lot of research, but when it comes to games anything less than 15 fps is a joke.

      It probably wouldn't be a huge challenge for them to up the clock rate by 75% or so to get 15 fps. Imagine if they moved it from an FPGA into custom silicon with the appropriate chip-designer voodoo.

  • FINALLY (Score:4, Interesting)

    by Anonymous Coward on Monday March 14, 2005 @11:36PM (#11940569)
    Hopefully, this will help FPGAs to get some much-needed exposure. Their potential is obvious to me, as I think it must be to anyone who's been shown some of what they can do. (For example, this wiki article [wikipedia.org] mentions that current FPGAs can achieve speedups of 100-1000 times over RISC CPUs in many applications.)

    Every time I hear about the latest beast of a GPU from ATI or NVidia, I can't help thinking what a waste all those transistors are for anything other than gaming, and maybe a couple other applications. We should be putting those resources into an array of runtime-programmable FPGAs! Your computer could reconfigure itself at the hardware level for specific tasks -- one moment a dedicated game console, the next a scientific workstation, etc.

    Lest I get too breathless here, does anyone care to inject some reality into this? Are there technological reasons why FPGAs haven't burst into the mainstream yet, or is it something else? Have I misunderstood their potential entirely?
    • Re:FINALLY (Score:2, Interesting)

      by PxM ( 855264 )
      They're still Not Good Enough. FPGAs are faster than running software on a normal CPU, but they're still not as fast as dedicated hardware. While modern GPUs are programmable, they still depend on extreme hardware which is basically tons of simple circuits doing the same few operations. FPGAs are used when the system has to be more flexible than just 1) get vertex 2) transform 3) paint. Places like ATI do use FPGA systems when they are designing the hardware, since it has faster turnaround time f
      • I don't think he means that we should use FPGAs to do things that dedicated hardware would be much better for. FPGAs would allow today's computers to perform the tasks of many dedicated pieces of hardware, not quite at their level, but fast enough for enough applications to make it worthwhile. Increased versatility my friend, that is the reason.

        An extra reason for the slashdot crowd would be the open source potential. Wide availability would attract developers to open source FPGA projects and could lead to
    • by xtal ( 49134 ) on Monday March 14, 2005 @11:54PM (#11940666)
      Unfortunately the price is an order of magnitude (or two... or three) too high for FPGAs to really be a consumer tech. The issue, I think, is that an ASIC costs so little in volume that, rather than spend all the money on an FPGA design that might be obsoleted next year anyway, a vendor is more likely to commit a design to silicon and then sell that.

      There's also the speed issue - I've spent DAYS of CPU time getting a design synthesized from VHDL for a moderately complicated IC built up from available cores.

      Factor in optimizing floorplans and the like, and you're talking about serious time commitments to optimize the hardware.

      It works; I've been paid to do it in the past; but it's not something I can see in the consumer market for the time being.

      An exciting hybrid is interesting though: putting silicon CPU cores on the same die with an FPGA. They've been around for a while, and I haven't done any FPGA projects in ~18 months - but I haven't seen any real movement outside of areas where FPGAs are already popular.

      See Open Cores [opencores.org] (no, not sores.. :-) ) if you're interested in this - there is open source hardware out there, some really good designs at that.

    • Re:FINALLY (Score:2, Informative)

      I'm actually on a reconfigurable computing project which focuses on this. The simple fact is that right now we don't have good methods for generating HDL for a given task, and we also lack the tools to properly swap tasks into and out of an FPGA while maintaining a reasonable communication speed to them. It's being worked on, but it will take time to develop the tools, and even longer before software makers start using them.
    • You need about an order of magnitude more transistors to do something with an FPGA than to design a special chip for some task. That makes it an order of magnitude more expensive, too, and consume much more power. And even then, it won't run as fast as an equivalent hard-wired chip.

      People build special-purpose chips for the mass market because it's the cheapest way of getting the functionality out.

      And as general purpose computational devices, FPGAs are just too hard to program and too inflexible.
  • by poopdeville ( 841677 ) on Monday March 14, 2005 @11:36PM (#11940571)
    This is great! I do work with an animation company, and a couple of these bad boys would seriously speed up our render times. The last video our lead artist did had to be rendered below 720x480 because we didn't have six months or a cluster of G5s. We've also been looking at buying time on IBM's supercomputers, but this might end up being cheaper in the long run.
  • Saarland... (Score:4, Interesting)

    by Goonie ( 8651 ) <robert,merkel&benambra,org> on Monday March 14, 2005 @11:38PM (#11940581) Homepage
    It's really interesting to see that this comes from the University of Saarland. Saarland is a rather out of the way [wikipedia.org] part of Germany, near the border with France and Luxembourg.

    It's rather pretty in a European countryside kind of way - hills with wine grapes on them, big rivers with boats cruising up and down, and big vegetable gardens everywhere (Germans sure love their vegetable patches) - though I doubt it's the kind of place too many international tourists visit. Not the kind of place you'd expect cutting-edge graphics research either; but then, you find all manner of interesting research in all manner of places. Even Melbourne, Australia :)

    Hi to any residents of Saarland reading this - are they holding the German round of the World Rally Championship there this year?

    • ...and Cray was located in LaCrosse, Wisconsin.

      (another place where bratwurst are consumed)
    • Yeah, Melbourne has nothing but tonnes of finance companies, call centres and medical/lawyer companies, no real research - oh, and a .au registrar.

      Sure, there's a handful of gaming companies, though to get into that line of work you already have to have been on a team that made a game or two or three and have a degree (how many degreed people know gaming?). I know companies want quick results with no train-up, but come on, if someone has already done three games, then they are probably burned out and tired from that type of wor
  • by Lisandro ( 799651 ) on Monday March 14, 2005 @11:39PM (#11940589)
    ... are impressive (here [uni-sb.de] and here [uni-sb.de], for example). They don't look like much and might appear a bit dull, but the amount of detail in the reflections and such is surprising.
    Call me a kid, but this amazing technology appears and all I can think of is how cool it would be to see enemies coming up behind you reflected in a sphere...

    Too bad there's no video - but then again, the poor server is doing bad enough as it is.
  • What kind of FPGA? (Score:3, Interesting)

    by brandido ( 612020 ) on Tuesday March 15, 2005 @12:04AM (#11940720) Homepage Journal
    Working with FPGAs, I was quite curious to find out what kind of FPGA they are using - both Xilinx and Altera have some advanced hard functions (such as multiply-accumulate blocks, block RAM, etc.) that seem like they could have a huge impact on the capabilities of this board. Unfortunately, after browsing through the links, I had no luck finding any information about which FPGA they are using. Was anyone able to find this out? Even the pictures of the board only show its bottom side, so it is impossible to see the chip markings!
  • by Speare ( 84249 ) on Tuesday March 15, 2005 @12:06AM (#11940730) Homepage Journal

    Are you sure the Boeing thing was raytracing 350 million polygons? Or just traditional raster pipeline rendering?

    See, the reason I ask is, you generally get away from raytracing polygons and raytrace against the actual nurbs or other mathematical surface definitions. That's the point. You don't feed it to simple scan-and-fill raster pipelines.

    • I can't speak for the chip yet, but I've read the papers on their software system and know that it gets a good part of its speed by rendering triangles only. The secret is that they have a cleverly laid-out, cache-friendly BSP tree of the triangles, and their code uses SIMD instructions to intersect 4 rays at a time with the triangles in the tree. Rendering triangles only lets them get away with a more compact (and therefore more cache-friendly) memory structure and lets them avoid the performance penal

      • If it's true that they're only tracing triangles, then this isn't "real" raytracing to me. I'm sure it will be cool and useful, but there are a lot of things you can do with ray tracing that require more than triangles. Raytracing against solid objects with intersections and whatnot (a la POV-Ray) can do a lot of things that this can't (like correctly rendering a crystal sphere with something behind it). I do hope hardware raytracing takes off, though. The 3D we have today is all an optimization hack, hence th
    • Most ray tracing renderers trace triangles. Blue Sky's CGI Studio is the only renderer I have heard of that traces NURBS surfaces directly.
  • Anti-Planet (Score:5, Interesting)

    by KalvinB ( 205500 ) on Tuesday March 15, 2005 @12:18AM (#11940765) Homepage
    Anti-Planet Screenshots [virtualray.ru]. Anti-Planet is an FPS rendered entirely using ray tracing. It requires an SSE-compatible processor (PIII and above; AMD only recently implemented SSE in their processors). It has been out since long before Doom 3, runs on systems Doom 3 couldn't possibly run on, and the graphics tricks it does are just now being put into raster-graphics-based games.

    That, along with Wolf 5K [wolf5k.com], inspired me to start working with software rendering. I think ray tracing will eventually be the way real-time graphics are rendered, in order to keep raising the bar for realism.

    Real Time Software Rendering [icarusindie.com]

    I'm working on tutorials covering software rendering topics. The tutorials start by deobfuscating and fully documenting Wolf5K, cover some basic ray tracing, and are now going through raster graphics, since the concepts used for raster graphics apply to ray tracing as well. I'll be returning to more advanced ray tracing stuff later. The tutorials also cover an enhanced version of Wolf5K [icarusindie.com] written in C++ that is true color and has no texture size limitations.

    • Anti-Planet looks cool. It reminds me of an old DOS game (the name escapes me) where the characters were all built from ellipsoids: since they look like an ellipse from every point of view, it worked very well (and fast - quite a bit faster than plain polygons, IIRC) on the limited hardware back then.
    • Re:Anti-Planet (Score:4, Informative)

      by bani ( 467531 ) on Tuesday March 15, 2005 @12:46AM (#11940878)
      amd only "recently" implemented sse? they've had it since 2001.

      More recent AMD chips have SSE2, and SSE3 on AMD64 is just around the corner.
  • I was under the impression that real time raytracing was still far from practical. I know that Pixar's Renderman uses a much more efficient algorithm, and it still takes on the order of hours to render a full-format movie frame on beefy hardware (I know that's large, but raytracing should scale fine, and we aren't talking about millions of times more pixels). IIRC they use a raytracer only when they need reflections, and those frames are extremely expensive in terms of CPU time.
    • Couldn't they use one CPU or machine per raster line? I.e., 1080i needs 540 CPUs, or 270 dual P4s/G5s.

      To render 2048 pixels in 41 milliseconds shouldn't be too hard; that's 49 pixels per millisecond, about 1/1000th of a 3 GHz CPU, which is about 2.5 million instructions, or roughly 51,000 per pixel.
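
      A quick sanity check of that budget in C++ (illustrative only; the 2048-pixel line, ~41 ms frame time and the 2.5 million instructions per millisecond are just the figures quoted above):

      #include <cstdio>

      int main() {
          const double pixelsPerLine = 2048.0;
          const double frameTimeMs   = 41.0;     // roughly 24 fps
          const double instrPerMs    = 2.5e6;    // the quoted per-millisecond budget on a 3 GHz CPU

          double pixelsPerMs   = pixelsPerLine / frameTimeMs;  // ~50 pixels per millisecond
          double instrPerPixel = instrPerMs / pixelsPerMs;     // ~50,000 instructions per pixel

          std::printf("%.0f pixels/ms -> %.0f instructions per pixel\n",
                      pixelsPerMs, instrPerPixel);
          return 0;
      }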

      Though Pixar has lots of money, and instead of just buying 270 G5 servers, they could make their own FPGA boards to do it.

    • Re:Surprising (Score:2, Informative)

      by iduno ( 834351 )
      If you have a look at the sample pictures, they aren't extremely detailed compared to what Pixar uses. To get something running near real time at the quality Pixar uses, you would need hundreds, if not thousands, of better and more expensive FPGAs running concurrently to produce a reasonable frame rate. Considering some FPGAs can cost around $1000 apiece for the better ones, this could make it too expensive to put something together at present. While FPGAs are good for designing hardware to handle ray
  • Mirror to video (Score:2, Informative)

    Here is a mirror [cmu.edu] of the video.
  • by cheekyboy ( 598084 ) on Tuesday March 15, 2005 @12:52AM (#11940901) Homepage Journal
    When the magic Emotion Engine of the PS2 was announced, I thought, hmmm, are they going to for once try realtime raytracing in hardware, or continue with tired old polygon rendering?

    Imagine if nvidia threw in an extra 5M transistors for a raytracing option ;)

  • PS5, Xbox Next II? (Score:3, Insightful)

    by KrackHouse ( 628313 ) on Tuesday March 15, 2005 @01:14AM (#11940982) Homepage
    I wonder if they're thinking ahead enough to consider combining the new PhysX chip with a few of these things on a multicore chip. George Lucas just merged his game and movie production studios; this seems to be a real trend.
  • It's odd that they think this is news, considering that Advanced Rendering Technology [art.co.uk] was building 3D ray tracing hardware a decade ago.
  • HQ RT Ray Tracing (Score:3, Insightful)

    by SnprBoB86 ( 576143 ) on Tuesday March 15, 2005 @01:28AM (#11941020) Homepage
    As interesting and promising as this is, it is currently useless because it seems to produce low-quality renders.

    The screenshots on the site all show images that could easily be rendered with much greater quality and efficiency using shadow maps or stencil shadows, and manual matrix transformations with portal rendering for reflections.

    When this hardware can render scenes on par with that of a professional software ray tracer in real time, then there will be some serious consumer demand.
  • by sacrilicious ( 316896 ) <qbgfynfu.opt@recursor.net> on Tuesday March 15, 2005 @01:36AM (#11941047) Homepage
    Within the last five years I worked for a company that made 3D rendering chips. The operation that was encoded in hardware was that of testing a ray against a triangle; on the chip produced by my former employer, this operation could be done in parallel something like 16 times, using only one or two clock cycles.
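
    For readers wondering what that hardwired operation looks like in software, this is the standard Moller-Trumbore ray/triangle test in plain C++ (illustrative only; it is not the company's circuit, which per the comment above evaluated roughly 16 such tests in a clock cycle or two):

    #include <cmath>

    struct Vec3 { float x, y, z; };

    static Vec3 sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    static Vec3 cross(Vec3 a, Vec3 b) { return { a.y * b.z - a.z * b.y,
                                                 a.z * b.x - a.x * b.z,
                                                 a.x * b.y - a.y * b.x }; }
    static float dot(Vec3 a, Vec3 b)  { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // Returns the hit distance t (> 0) along the ray, or INFINITY on a miss.
    float rayTriangle(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2) {
        const float eps = 1e-7f;
        Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
        Vec3 p  = cross(dir, e2);
        float det = dot(e1, p);
        if (std::fabs(det) < eps) return INFINITY;   // ray parallel to the triangle plane
        float inv = 1.0f / det;
        Vec3 tv = sub(orig, v0);
        float u = dot(tv, p) * inv;                  // first barycentric coordinate
        if (u < 0.0f || u > 1.0f) return INFINITY;
        Vec3 q = cross(tv, e1);
        float v = dot(dir, q) * inv;                 // second barycentric coordinate
        if (v < 0.0f || u + v > 1.0f) return INFINITY;
        float t = dot(e2, q) * inv;                  // distance along the ray
        return t > eps ? t : INFINITY;
    }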

    Once this functionality was achieved, there were some contextual architectural decisions to be made about what asic would include these gates. The company decided to implement these gates on a chip that had about 16MB of ram on it and its own execution unit (vaguely like one of the subchips in IBM's upcoming cell architecture, IIUC) and then to put arrays of these independent exec chips on daughter cards.

    Many of these decisions were trying to solve the specific problems of raytracing, e.g. how do we get geometry info into the chips efficiently, how can we parallelize the running of shaders so they don't bottleneck things, etc. These problems manifested themselves quite differently than they did for zbuffering hardware, and there were lots of clever-yet-brittle constructs used which could be shown to work in specific cases but which had pot-holes that were hard to predict when scaling or changing the problem/scene at hand.

    Rather than selling these chips themselves, the company decided that programming them was hard enough that the company itself would package up the chips into a "rendering appliance", which was essentially a computer running Linux with a few of these daughtercards in it. For the software interface to rendering packages, the company chose RenderMan. The task then became to translate rendering output from disparate sources (Maya, etc.) into RenderMan expressions, and this was devilishly hard to get right. Each rendering package had to be individually tweaked in emulation, some companies didn't help out much with info, and even those that did weren't able to supply all the info needed in many cases... my former employer ended up chasing un-spec'd features down ratholes.

    The end result was really a disaster. Nothing worked quite right, which was problematic because these chips were marketed not just as fast but as faster drop-in replacements for existing software renderers.

    I find it interesting how this entire tsunami of problems snowballed from the initial foundation of how raytracing algorithms (and therefore hardware) are different from zbuffering.

    • You're quite right about the differences between rasterizing and ray tracing. In my RT library, polygon (and other object) intersection tests are not the limiting factor, not even close. Using a proper scene subdivision structure you generally only do a couple intersections per pixel, and then only one shading operation. This means a 1 megapixel display running at 60 FPS need only do 60M shadings per second (barring reflections etc). The bottleneck in my code is a recursive tree traversal. Unfortunately onc
  • by Lerc ( 71477 ) on Tuesday March 15, 2005 @06:58AM (#11942053)
    So many people are dismissing ray tracing out of hand because the screenshots are not as pretty as some from the latest games.

    Let me ask you: why would you expect a team of electrical engineers/mathematicians/programmers to be able to produce a prettier image than a team of extremely talented artists?

    The job of the artist is to make it look pretty. The job of the guys making this chip is to provide features and to make it go fast.

    So it's not fast enough yet. It's a prototype. When it can render decent resolutions at decent framerates then bring in the artists and see what they can do with it.

"An idealist is one who, on noticing that a rose smells better than a cabbage, concludes that it will also make better soup." - H.L. Mencken

Working...