Topics: Graphics, Software, Science

Stippling As Fast 3D Technique (185 comments)

An anonymous reader writes "This Stippling effort wins best paper at IEEE Boston conference. Could real time medical rendering be whizzier than Id?"


  • by Temporal ( 96070 ) on Wednesday November 27, 2002 @02:51AM (#4765532) Journal
    Could real time medical rendering be whizzier than Id?

    Probably, but not because of this. This technique would have very little use in a gaming environment. Indeed, algorithms intended for medical imaging rarely do. In this particular case, the dotted images don't really provide any sort of occlusion. That is, you can see right through the image to whatever is behind it. Great for medicine (where the whole point is to see inside the patient's body), bad for games.

    As a matter of fact, when I read this, my only thought was "well, duh". I do 3D graphics myself, and I am having a hard time believing that this technique is new. Particle system rendering? There must be something more to this that the dumbed-down article isn't telling us. Maybe they have a new, advanced algorithm for deciding exactly where to place the dots... that really must be it. As long as we're reporting on low-level algorithms, I have a new algorithm I came up with for efficiently drawing borders on the silhouette edges of cartoon renderings. Do you want to hear about it? No? Aww...
  • by good-n-nappy ( 412814 ) on Wednesday November 27, 2002 @02:55AM (#4765546) Homepage
    IIRC one of the biggest advantages of stippling in rendering surfaces is that you can get a fast simulation of transparency. Check out here [sgi.com]. So maybe the same applies in 3D. The 3D stippling might allow you to simulate complex semi-transparent volumes - perhaps also avoiding some z sorting or alpha blending.

    Also, maybe you WOULD see more of this in games if it could be done in real time. Just because all we have now is polygons doesn't mean that's the way it has to be.
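    The screen-door trick described above is easy to sketch. The following NumPy mock-up is my own illustration (not from the SGI page): a fixed checkerboard mask decides, per pixel, whether the near layer or the far layer wins, approximating 50% alpha blending with no blend arithmetic at all.

```python
import numpy as np

def screen_door_composite(near, far):
    """Approximate 50% transparency by drawing the near layer only on
    checkerboard pixels; the far layer shows through on the rest.
    No per-pixel blend math or depth sorting is needed."""
    h, w = near.shape
    yy, xx = np.indices((h, w))
    mask = (xx + yy) % 2 == 0          # checkerboard stipple pattern
    return np.where(mask, near, far)

near = np.full((4, 4), 1.0)            # the "glass" layer
far = np.zeros((4, 4))                 # the background
out = screen_door_composite(near, far)
# half the pixels come from each layer, so the average is 0.5,
# just as true 50% alpha blending would give
```

    Because each pixel is written from exactly one layer, the layers never need to be sorted or read back, which is where the speed comes from.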
  • by DrunkenTerror ( 561616 ) on Wednesday November 27, 2002 @03:16AM (#4765608) Homepage Journal
    This article sucks, and the /. write-up sucks more. It has virtually nothing to do with id, Doom, or games in general. They're visualizing data sets, not shooting rockets at each other at 60 FPS (or 8 FPS in the Doom 3 demo ;). Rendering static, previously collected data vs. on-the-fly rendering of a rapidly changing dynamic environment. What should one expect from an anon submission, though? :P

    And how bout these amazing captions? They read like a typical /. dupe. (similarities highlighted)

    IMAGE CAPTION 1: This image of a human cranium was created with a new kind of computer-imaging software that uses the ancient technique of stippling to convert complex medical data into 3-D images that can be quickly viewed by medical professionals. Data from CT scans were converted into dots to create the stippled image. Cave dwellers and artisans used stippling thousands of years ago to create figures by painting or carving a series of tiny dots. More recently, 19th century Parisian artist Georges Seurat used the method, also called pointillism, to draw colorful, intricately detailed works. Because dots are the most simple visual element in a picture, they also are ideal for computer visualizations.

    IMAGE CAPTION 2: This picture of a human foot was created with a new kind of computer-imaging software that uses the ancient technique of stippling to convert complex medical data into 3-D images that can be quickly viewed by medical professionals. In this image, data from CT scans were converted into dots to create the stippled image. Stippling uses tiny dots to create an image. Because dots are the most simple visual element in a picture, they also are ideal for computer visualizations.

    Oh well, at least their subjects and verbs agreed in number. (...data... ...were...)
  • by bbc22405 ( 576022 ) on Wednesday November 27, 2002 @03:16AM (#4765610)
    I'm reading lots of comments about "how is this different from just plotting pixels?" and such. If you were given a voxel dataset, and were given the job of showing the internal structure, in a nifty, sorta-transparent, sorta-3D way ... you would likely fail.

    It is not as simple as it seems. You want the nearer bones (or whatever structure) to show up more, but not completely obscure what is behind. And you want the stuff behind to look "behind". But how?

    It is not the same problem as calculating normals of polygons to see which surfaces are facing the viewer, sorting things by depth, and finding out what is completely obscured by what else. Go back, and think again.

    I'm guessing (without reading the paper) that the point of using dots is that the dots are not infinitely small, but rather have a small measurable size, and so the nearer dots are drawn larger, but that all dots are small enough that they don't tend to "hide" each other in the Z direction, but rather "pile up" a bit to make the piled-up places darker. This sort of "implementation" is interesting, I think, solely because one might be able to implement it in a way that makes use of fairly standard operations implemented by vroomy graphics hardware. (I.e., it is not otherwise an obvious implementation of the desired operation, and I'll guess that the initial reaction of the people who built the graphics hardware/driver is "hey, you're abusing it!", followed almost immediately by "wow, cool!". It's as absurd and wonderful as if you drew a cloud of smoke between you and another object by drawing each particle in the cloud.)
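    That guess can be mocked up in a few lines. This is my own toy sketch of the idea, not the paper's algorithm: each voxel is splatted as a small additive dot whose size and weight shrink with depth, so nearer structure reads larger and darker while distant dots still "pile up" instead of being occluded.

```python
import numpy as np

def splat_points(points, size=32):
    """Additively splat (x, y, z) points (coords in [0, 1], z = depth)
    into a 2D buffer. Nearer points (small z) get a larger, heavier
    dot, but nothing is occluded: overlapping dots just accumulate."""
    img = np.zeros((size, size))
    for x, y, z in points:
        r = max(1, int(3 * (1.0 - z)))   # dot radius shrinks with depth
        w = 1.0 - 0.5 * z                # nearer dots contribute more
        cx, cy = int(x * size), int(y * size)
        img[max(0, cy - r):cy + r, max(0, cx - r):cx + r] += w
    return img

pts = [(0.5, 0.5, 0.1), (0.5, 0.5, 0.9)]  # a near and a far dot, same (x, y)
img = splat_points(pts)
# where the two dots overlap, both contribute: density "piles up"
# rather than the near dot hiding the far one
```

    Since the splats are purely additive, the points need no depth sorting, which is exactly the kind of operation graphics hardware does cheaply.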

  • by wmoyes ( 215662 ) on Wednesday November 27, 2002 @03:18AM (#4765616)
    Didn't Future Crew have a pixel-based 3-D effect towards the end of the Unreal demo? It was really cool for its time.
  • id did this already (Score:4, Interesting)

    by mewsenews ( 251487 ) on Wednesday November 27, 2002 @03:44AM (#4765666) Homepage
    Does no one remember Quake 2? The software renderer had an option called "Stipple Alpha" which would render transparent entities such as water and glass using a stippling method. It was much faster than true alpha blending, and it got the job done. Carmack's, like, four years ahead of the curve as usual..
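    Stipple alpha of that kind generalizes to ordered dithering: compare each pixel's alpha against a tiled threshold matrix and draw the pixel only if it passes. A hedged sketch of the idea (my own mock-up, not id's source code):

```python
import numpy as np

# 2x2 Bayer threshold matrix, normalized into (0, 1)
BAYER2 = (np.array([[0, 2],
                    [3, 1]]) + 0.5) / 4.0

def stipple_blend(src, dst, alpha):
    """Screen-door 'blend': draw src only where alpha beats the tiled
    threshold matrix. alpha=0.5 yields a checkerboard, and on average
    the drawn fraction of pixels matches the requested alpha."""
    h, w = dst.shape
    yy, xx = np.indices((h, w))
    keep = alpha > BAYER2[yy % 2, xx % 2]
    return np.where(keep, src, dst)

out = stipple_blend(np.ones((4, 4)), np.zeros((4, 4)), alpha=0.5)
# half the pixels drawn for alpha = 0.5; a quarter for alpha = 0.25
```

    The appeal in a software renderer is that each pixel is either fully written or fully skipped, so no per-pixel multiply-and-add is needed.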
  • by varaani ( 77889 ) on Wednesday November 27, 2002 @03:48AM (#4765674)
    While what you're saying has been true traditionally, it seems to me that more and more game creators are (re-)discovering that it's the physics and structure of real life that give rise to the best appearances.

    That's only one side of the issue. For large 3D objects, you're correct. Billboarding a la Doom is definitely a fading trend. But appearance models are very popular in current 3D and not going away anytime soon.

    Think of texturing as an appearance model. Simulating the actual phenomenon of light hitting individual molecules is very heavy, but that's what you're going to need if you're trying to solve the true properties of a real-world material instead of just modeling its appearance.

    The same goes for using programmable texture for modeling fur, for example. Modeling the individual hairs one by one with polygons is computationally much more intensive, and the results aren't necessarily better, unless you're doing raytracing to account for the scattered light in the fur. Computers must become a lot faster (millions, billions, gazillions) before appearance models are going away.

    mathematical calculation of a reality is more expensive than the reality itself

    That may well be true. Luckily humans do not perceive the reality directly, so most of the information contained in it is lost, and modeling just what is perceivable (i.e. appearance) continues to be a justifiable approach.
  • Game Application (Score:2, Interesting)

    by AmbientNeedle ( 629661 ) <needle.needle@org> on Wednesday November 27, 2002 @04:13AM (#4765720) Homepage
    To shrug this off as having no bearing on the gaming world may be a bit narrow-minded. Remember, just because something is in 3D, it doesn't mean you want it to look like it's in 3D.

    Take cel shading, for instance: a technique in which specially designed shading networks are applied to polygons in order to emulate the look of two-dimensional animation. For the same reasons that technique has garnered so much popularity in the gaming industry over recent years, an application like this may find a similar following.

    In addition, this type of rendering goes beyond gaming, right into the entertainment industry. Art studios are constantly looking for new ways to present their animations. There have been several festival animations, done in 3D environments, that were purposely rendered in 2-dimensional ways. Who's to say you won't see this method used in the future?
  • by n3k5 ( 606163 ) on Wednesday November 27, 2002 @06:30AM (#4766003) Journal
    it's wrong that you couldn't have texturing. you could even have 3D textures---something most current gfx cards still can't do. remember the voxel terrain of comanche? they had a texture across their terrain back in 1994. you're right, "at most, you could have coloured dots", but that's exactly what a texture is about: colouring the picture elements, whether those are pixels or voxels. not sexy for games? comanche was _damn_ sexy back in the day; you wouldn't have expected that kind of terrain to be possible on your 486 25MHz!

    of course, texturing only looks good if the voxels are dense enough and form a solid, whereas the idea of the technique we're talking about here is all about rather sparse voxels. you could still colour them in to make arteries look different from veins or something.

    an issue with which i'd be much more concerned: what kind of hardware acceleration can you get for that technique? i doubt that it renders 10x faster than polygons with a comparable level of detail on your vanilla geforce.
  • It's a Good Thing (Score:1, Interesting)

    by Anonymous Coward on Wednesday November 27, 2002 @09:49AM (#4766644)
    Visualizing volumetric datasets at interactive framerates is very useful for gaining a precise understanding of the spatial relationships of the embedded structures. The idea the paper presents speeds up rendering by applying the technique of selective ignorance. Most other volume renderers simply use a dataset with lower resolution if the rendering process gets too slow, and therefore risk missing important details. If radiologists can model their visualization needs by emphasizing the importance of, e.g., surfaces by using a higher point density in regions of high gradient magnitude, it's safer to trade accuracy for speed.
    The second thing - as another comment already pointed out - is that photorealism might not be the primary goal to achieve with data that is acquired by measuring properties that have no canonical correspondence in the visible electromagnetic spectrum. If you think about it, it's hard to define what photorealism means in that context.
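    The gradient-driven density described above can be illustrated as follows. This is my own sketch of the general idea, not the paper's method: compute the gradient magnitude of a scalar volume and draw proportionally more stipple points where the magnitude, i.e. the "surface-ness", is high.

```python
import numpy as np

rng = np.random.default_rng(0)

def importance_sample(volume, n_points):
    """Pick voxel indices with probability proportional to gradient
    magnitude, so boundary regions receive far more stipple points
    than homogeneous interior or exterior regions."""
    grads = np.gradient(volume.astype(float))
    mag = np.sqrt(sum(g * g for g in grads))
    p = mag.ravel() / mag.sum()
    idx = rng.choice(mag.size, size=n_points, p=p)
    return np.unravel_index(idx, volume.shape)

# A toy "organ": a solid ball of radius 5 inside an empty volume.
z, y, x = np.indices((16, 16, 16))
vol = ((x - 8)**2 + (y - 8)**2 + (z - 8)**2 < 25).astype(float)
zs, ys, xs = importance_sample(vol, 500)
r = np.sqrt((xs - 8)**2 + (ys - 8)**2 + (zs - 8)**2)
# every sampled point lies near the ball's surface (radius ~5),
# none in its homogeneous interior or in the empty surround
```

    Because the point budget is spent where the data actually changes, accuracy is lost only in regions a radiologist would not look at anyway.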
