Stippling As Fast 3D Technique
An anonymous reader writes "This Stippling effort wins best paper at the IEEE Boston conference. Could real-time medical rendering be whizzier than Id?"
No polygon replacements. (Score:5, Informative)
NON-PHOTOREALISTIC VOLUME RENDERING USING STIPPLING TECHNIQUES
This is obviously a compromise approach. There's no way this would be able to make photorealistic games.
The difference between medicine and gaming is that with medicine, you have a real-life object whose structure and PROPERTIES you're trying to recreate realistically, regardless of how off-color or computer-generated it appears.
With gaming you have an object that's computer generated, whose APPEARANCE you're trying to recreate, with lesser regard to the properties within that object. For instance, most gaming models consisting of polygons have hollow insides...
People at Id don't bother to render and model the organs. People in medicine don't care about having models of human hearts bumpmapped or glossy.
This is supposed to be news for nerds. What's with all the mindless hype?
I don't see what's new or novel about this (Score:5, Informative)
Most of the complexity in volume rendering is in preprocessing the data (alpha testing is a simple approach; other methods involve transforms into the frequency domain, etc.) to reduce the asymptotic complexity of the set to be rendered from the naive O(n^3) down to something proportional to the actual visible set, not in the rendering itself.
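For illustration, the simplest form of that preprocessing, a naive alpha-test cull over the whole volume, could look like this pure-Python sketch (the threshold, data layout, and names are my own assumptions, not anything from the paper):

```python
def visible_set(volume, threshold=0.05):
    """Cull empty/transparent voxels so later stages only touch voxels
    that can contribute to the image, rather than all n^3 of them.

    volume: an n x n x n nested list of densities in [0.0, 1.0].
    Returns a list of (i, j, k, density) for voxels above the threshold.
    """
    kept = []
    n = len(volume)
    for i in range(n):
        for j in range(n):
            for k in range(n):
                d = volume[i][j][k]
                if d > threshold:  # simple alpha test
                    kept.append((i, j, k, d))
    return kept
```

The cull itself is still an O(n^3) sweep, but it only runs once per dataset; everything per-frame then works on the (usually much smaller) kept set.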
I don't think they are doing anything different in this stage -- it's still the same dataset that needs to be worked with, after all.
While on the subject of real-time filters... (Score:5, Informative)
That said, you CAN have sketchy-looking Quake if you want with NPRQuake [fileshack.com]. I've tried this and it looks incredible; it's a shame no commercial games have used this technique yet. Reminds me of that 80s music video where the gal walks into the mirror, and everything's all "pencilly-looking" but in real-time... now what was that damn song? (racks brain)
Also check out Waking Life [imdb.com]. It's available on P2P as I write this, but you didn't hear that from me, and you're better off renting the DVD for all the extra goodies. It's not as pretentious as many make it out to be, and the visuals alone are worth it.
To read more... (Score:3, Informative)
Re:Isn't this just... (Score:1, Informative)
ever played q2 (Score:1, Informative)
Anyone else remember this?
Downloadable (Score:5, Informative)
Re:It's really not that far out (Score:5, Informative)
Actually, modern hardware can be made to render dots only (i.e., the vertices of polygons/triangles) as opposed to rendering the whole shaded surfaces. It's not a hack of making a small enough surface that looks like a dot; it's actually rendered as dots. For those interested, there's a demo [nvidia.com] for nVidia cards where you can tell it to render dots only...
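As a software analogy (not the nVidia demo's actual code), here's a minimal pure-Python sketch of perspective-projecting vertices straight to screen-space dots; the function name, FOV, and near-plane cull are my own assumptions:

```python
import math

def project_points(points, fov_deg=60.0, width=640, height=480):
    """Perspective-project 3D points (camera at the origin, looking down
    the -z axis) to 2D pixel coordinates, one dot per vertex."""
    f = 1.0 / math.tan(math.radians(fov_deg) / 2.0)  # focal length
    aspect = width / height
    dots = []
    for x, y, z in points:
        if z >= -0.01:  # behind or too close to the camera: cull
            continue
        # perspective divide: normalized device coords in [-1, 1]
        sx = (f / aspect) * x / -z
        sy = f * y / -z
        # map NDC to pixel coordinates (y flipped, origin top-left)
        px = int((sx + 1.0) * 0.5 * width)
        py = int((1.0 - (sy + 1.0) * 0.5) * height)
        if 0 <= px < width and 0 <= py < height:
            dots.append((px, py))
    return dots
```

On real hardware the same transform runs on the card (e.g., via a points-only primitive mode), which is why dot clouds render so cheaply.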
I haven't read too much detail about this, but if IEEE gave it best paper, they must be doing something different from what normal cards do: probably bypassing the standard rendering path, which leans heavily on matrix multiplications, and instead making some small assumptions (maybe no perspective correction) and going with faster, smaller transform equations...
If that's not the case, I give them a *yawn*.
Re:you'd think it's simple, but it's not (Score:3, Informative)
No. Having had a quick glance at the paper (available through the link to the renderer provided elsewhere), the technique centres around generating a number of points per voxel that varies according to the shading you want at that location in the image. Hence, the density of dots in any given region varies in proportion to the "darkness" of the underlying data set in that region. The tricky part is working out the number and distribution of points that will produce a viewable image with all desired features highlighted, given that most images will be viewed at a resolution that would cause the rendered object to appear as a black smudge, even at a maximum point density of 1 point per voxel.
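Roughly, the per-voxel point generation might look like this pure-Python sketch (the max-points budget, the linear darkness mapping, and the uniform jitter are my made-up simplifications, not the paper's actual placement algorithm):

```python
import random

def stipple_voxel(i, j, k, darkness, max_points=8, rng=None):
    """Emit a number of stipple points inside voxel (i, j, k) roughly
    proportional to its 'darkness' (0.0 = empty, 1.0 = fully dark),
    with each point jittered uniformly within the voxel's unit cube."""
    rng = rng or random.Random(42)  # fixed seed for reproducibility
    n_points = round(max_points * max(0.0, min(1.0, darkness)))
    return [(i + rng.random(), j + rng.random(), k + rng.random())
            for _ in range(n_points)]
```

The hard part the paper addresses is exactly what this sketch glosses over: choosing the budget and distribution so that features stay visible at screen resolution instead of merging into a smudge.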
Stippling: everything old is new again... (Score:3, Informative)
For many decades, stippling was the standard technique used for rendering biological or medical illustrations. I suppose it has something to do with the printing processes used for line art being cheaper than those for half-tones.
Indeed, I see that this journal [www.uqtr.ca] and perhaps others still say "Use 'stippling' and 'hatching' techniques to achieve tonal quality. Avoid the use of shading (pencil, wash, or airbrush) for a tonal effect..."
Now, if only we had a font that reproduced the look of Leroy lettering...*
*(OK, OK, a Leroy lettering set consisted of a sort of stencil, in which the letters were merely engraved deeply rather than perforating all the way through, and a little pantograph device. The pantograph had a technical pen and a tracing point. As you followed the stencilled letters with the tracing point, the technical pen would make corresponding motions on the paper. Very common for captions in technical illustrations in research papers, museum displays, etc. Obviously too neat to be handwritten, yet obviously not typeset...)
Stippling is faster because... (Score:2, Informative)
This is an advantage for doctors and medical professionals: they don't care whether an image is perfect, but they benefit greatly from an imperfect image that more clearly conveys the information in real time.
Dots are simpler in 3D, especially for this, because everything reduces to ratios and proportions rather than exact formulas for the edges of surfaces.
As stated in a previous post, the goal here is the PROPERTIES, not the graphical polish. Have you ever seen a sonogram? Those images are extremely cryptic, but to a trained eye they are extremely useful, because they depict the truth not in an exact way, but in a recognizable way.
We're human, we can use our brain to interpret this stuff.
To put a few things straight... (Score:2, Informative)
This technique claims to reinvent neither voxels nor point sprites; rather, it's a way to convert voxel datasets (like CT scans) into a three-dimensional stipple representation. Imagine a particle system with the particle density at a given place roughly proportional to the density value in the voxel data at that place.
OK, it's not quite that easy: the main part of the paper describes several algorithms for more "intelligent" placement of particles, to enhance features and to control the overall transparency.
The trick is then that these particles can be sent to a modern graphics card (GF3 and up) for immediate and fast display. You can zoom and rotate the model and your camera without having to recalculate anything, which makes rendering really fast.
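In other words, the stipple cloud is generated once in model space, and each frame only applies a camera transform to it, something like this pure-Python sketch (my own illustration of the idea, not code from the paper):

```python
import math

def rotate_y(points, angle):
    """Apply a per-frame camera rotation about the y axis to a
    precomputed stipple cloud. The points themselves are never
    regenerated; only this cheap transform runs each frame."""
    c, s = math.cos(angle), math.sin(angle)
    return [(c * x + s * z, y, -s * x + c * z) for x, y, z in points]
```

Because the transform is independent of the dataset's contents, zooming and rotating never touch the expensive placement stage; on real hardware even this step is offloaded to the card.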
The paper's future-work thoughts are about using vertex shader programs to shift some of the per-particle calculations to the GPU, which would enhance rendering speed and clarity further, since angle-dependent particle calculations would then be possible without any significant overhead.
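The kind of angle-dependent, per-particle calculation they mean could look like this on the CPU (a sketch of my own; silhouette-style opacity is just one example, and both vectors are assumed to be unit length):

```python
def point_alpha(normal, view_dir):
    """Per-particle, angle-dependent opacity of the sort a vertex
    shader could compute on the GPU: points whose surface normal is
    perpendicular to the view direction (silhouettes) stay opaque,
    while points facing the viewer head-on fade out.

    normal, view_dir: unit-length 3-vectors as (x, y, z) tuples.
    """
    dot = sum(n * v for n, v in zip(normal, view_dir))
    # opacity is highest where the normal is perpendicular to the view
    return 1.0 - abs(dot)
```

Running this per particle on the CPU every frame would be costly; as a vertex shader it comes essentially for free, which is the point of the future-work suggestion.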
I didn't find any real thoughts on decreasing the complexity of the precalculation process, though. Still, having to do the major part of the calculation only once per dataset instead of once per frame is a significant improvement over similar methods (such as most triangulation methods, which require a complete recalculation as soon as you try to look at other features of the dataset).
And the comparison with Id? Come on, this doesn't have ANY relationship to 3D games. Besides, Carmack is a good engineer who knows to deploy new rendering techniques at exactly the moment PCs are good enough for them, but all of his techniques since Wolf3D were invented years earlier by the very bunch of PhDs who also wrote this paper.
For demos, of course... hmmm... this could actually make metablobs look good.