Stippling As Fast 3D Technique 185
An anonymous reader writes "This Stippling
effort wins best paper at IEEE Boston conference. Could real time medical rendering be whizzier than Id?"
whizzier than id? (Score:4, Funny)
Re:whizzier than id? (Score:2, Insightful)
i think the only thing revolutionary about it is its application and the fact that it is being used for VOLUMES - which is really smart.
it's basically a POINT CLOUD which is the data that you get back from a lot of SCANNERS used to scan models and people for movie special fx.
but this point cloud doesn't only go for surface features but volume detail.
and let's not forget - it's just RENDERING VERTICES.
so at most, you could have colored dots but not any kind of texturing or advanced surfacing....
so not that sexy for games but really cool if it speeds up medical imaging.
not brilliant for technology (at least on renderside) but the idea is sharp.
jin
Re:whizzier than id? (Score:2)
surface features? yes, and even more! (Score:2, Interesting)
of course, texturing only looks good if the voxels are dense enough and form a solid, whereas the idea of the technique we're talking about here is all about rather sparse voxels. you could still colour them in to make arteries look different from veins or something.
an issue with which i'd be much more concerned: what kind of hardware acceleration can you get for that technique? i doubt that it renders 10x faster than polygons with a comparable level of detail on your vanilla geforce.
The real question is... (Score:2, Funny)
Re:The real question is... (Score:1)
It's really not that far out (Score:5, Insightful)
Re:It's really not that far out (Score:5, Informative)
Actually, modern hardware can be made to render dots only (ie vertices of polygons/triangles) as opposed to rendering the whole shaded surfaces. It's not a hack by making a small enough surface that looks like a dot, it's just actually rendered as dots. For those interested to see, there's a demo [nvidia.com] for nVidia cards where you can tell it to render dots only...
I haven't read too much detail about this, but if IEEE says it's the best paper, they must be doing something different than normal cards are doing, ie probably bypassing normal rendering methods which use matrix multiplications heavily, and instead making some small assumptions - like maybe no perspective correction - and going with faster smaller transform equations...
If that's not the case, I give them a *yawn*.
Re:It's really not that far out (Score:3, Insightful)
A point sprite by any other name... (Score:1)
Re:A point sprite by any other name... (Score:2)
To come so far to turn back. Ahh, technology.
Misuse (Score:1)
(-1 Defaming the Glorious Arts)
Or can we say... (Score:2, Funny)
Id is great and everything, but gee, I hope the answer is yes, they probably can.
Re:Or can we say... (Score:2, Flamebait)
You think that a few academics could beat them at graphics programming when no one else has been able to? That seems silly to me. That's like saying, "He could win a race in the Olympics, but he is too busy to enter."
Until some professor creates a better graphics engine than whatever Id is producing, I am afraid they will have to deal with it: A college drop out is kicking their butt...
Re:Or can we say... (Score:2)
One thing that is clear is that Carmack made the most of academic research. His genius lies in the drive and ability to exploit what's out there.
Re:Or can we say... (Score:3, Insightful)
Academics are good at coming up with applicable theories. There's a world of difference between a theory (and what is necessary to create one), and its application...
Id has know-how... The same know-how that the ironworker gets from handling iron and knowing small things like how it behaves under certain conditions... This know-how, the scientist doesn't have - or need.
Id is a software artisan.
If you find what I just said theoretical, take this simple example: Id probably spent weeks just optimizing the asm routine to draw a line.
The scientist wouldn't be interested in that... all they would care about is proving that it can be drawn; making it render fast is left to the reader as an exercise...
Re:Or can we say... (Score:2)
Btw, I worked out how denther got it faster than the fastest algorithm that they published - they set the mask to write to four separate buffers simultaneously. I forget the details - it's been many years - but I was very pleased with myself.
Re:Or can we say... (Score:1, Funny)
His name was Dennis Polla, U of MN EE department. One of the best profs I ever had.
Re:Or can we say... (Score:1)
No polygon replacements. (Score:5, Informative)
Abstract
NON-PHOTOREALISTIC VOLUME RENDERING USING STIPPLING TECHNIQUES
This is obviously a compromise approach. There's no way this would be able to make photorealistic games.
The difference between medicine and gaming is that with medicine, you have a real-life object whose structure and PROPERTIES you're trying to recreate realistically, regardless of how off-color or computer generated it appears.
With gaming you have an object that's computer generated, whose APPEARANCE you're trying to recreate, with lesser regard to the properties within that object. For instance, most gaming models consisting of polygons have hollow insides...
People at Id don't bother to render and model the organs. People in medicine don't care about having models of human hearts bumpmapped or glossy.
This is supposed to be news for nerds. What's with all the mindless generated hype?
Re:No polygon replacements. (Score:3, Insightful)
Beside the fact that modelling such information for a game would be ludicrously time-consuming, I fail to see why this technique could offer an advantage to the display of 3-D graphics in a gaming sense - and I doubt it's actually faster in terms of the amount of time it takes to get mathematical data translated onto a cathode ray tube. All the article says is that it's faster than previous techniques in medical imaging. The article doesn't say what those techniques are, but since I can't for the life of me see how a CT scanner would get polygon information out of x-rays, I think we can all be sure that they aren't at all similar to what the Quake 3 engine is doing.
Re:No polygon replacements. (Score:5, Interesting)
Also, maybe you WOULD see more of this in games if it could be done in real time. Just because all we have now is polygons doesn't mean that's the way it has to be.
Re:No polygon replacements. (Score:2)
Of course, IANADoctor, but I would imagine that this technique would also be very useful in so-called "4D" [gemedicalsystems.com] ultrasound systems. Increasing the speed of rendered frames with transparency would allow visualization of fast motion, such as heart valves, in 3 dimensions and real time. I don't think this is possible with any method except ultrasound (spin relaxation times make MRI unfeasible, and CTs have to image serially with individual slices by moving the patient).
Incidentally, I don't think there are many examples as perfect as this technology of Arthur C. Clarke's quote: "Any sufficiently advanced technology is indistinguishable from magic." Think about it: if you were a person living just 200 years ago, what would you think of a device that, when touched to someone's body, allowed you to see what was happening inside, live on a screen?
Re:No polygon replacements. (Score:2)
Anyway, we opted for the normal ultrasound but I was extremely impressed with the process. The individual 2D ultrasound images do not do it justice. In the live version, I could definitely count fingers and toes and even reconstruct a lot of the 3D structure in my head.
My point is that the radiologists are really, really good at spotting abnormalities in these grainy 2D pictures. So I'm wondering if these 4D ultrasounds aren't mainly just for parents. Not that that is necessarily a bad thing - but maybe it wouldn't be a huge leap for medical diagnosis.
What I've gathered so far from all these baby books I've been reading is that most prenatal abnormalities cannot be fixed. When something can be fixed, the risks of fixing it are really high. So they are only even going to try fixing the very major problems. I'm guessing that ordinary 2D ultrasounds can probably pick up these major problems. Maybe this will change though as the risks of prenatal surgery go down.
Montecarlo (Score:2)
The thing is, calling it "stippling" ignores the pre-existing term "Monte Carlo", which is precisely what is going on.
Similar techniques have been used in the 2D world to generate half-tone images without the use of regular half-tone screens. I believe in that context, it is called generating pseudo random halftones. Floyd-Steinberg is a deterministic version of the same idea.
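For the curious, Floyd-Steinberg is a one-pass error-diffusion dither: quantize each pixel, then push the quantization error onto the not-yet-visited neighbours. A minimal sketch (my own toy version, just to illustrate the idea):

```python
import numpy as np

def floyd_steinberg(gray):
    """Dither a grayscale image (values in 0..1) to pure black/white,
    diffusing each pixel's quantization error to unvisited neighbours."""
    img = gray.astype(float).copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            img[y, x] = new
            err = old - new
            # classic 7/16, 3/16, 5/16, 1/16 distribution
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return img

# A flat mid-gray patch dithers to roughly half black, half white dots.
out = floyd_steinberg(np.full((16, 16), 0.5))
```

The pseudo-random halftone methods mentioned above replace the deterministic error push with randomized thresholds, but the goal is the same: preserve average tone using only binary dots.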
Re:No polygon replacements. (Score:1)
Re:No polygon replacements. (Score:2)
Re:No polygon replacements. (Score:1)
With gaming you have an object that's computer generated, whose APPEARANCE you're trying to recreate, with lesser regard to the properties within that object. For instance, most gaming models consisting of polygons have hollow insides...
While what you're saying has been true traditionally, it seems to me that more and more game creators are (re-)discovering that it's the physics and structure of real life that give rise to the best appearances.
IOW, the more appearances need to be real, the more real the models need to be. It turns out that more often it's actually simpler to model reality than to recreate its appearance in some other fashion. That's why research institutions and governments spend so much money on exabit computers to model reality better.
[ramble]Hmmm... I wonder if someone has already done a proof that the mathematical calculation of a reality is more expensive than the reality itself... That it's more efficient to have actual reality than to try to model it on a computer. [/ramble]
Mattcelt
Re:No polygon replacements. (Score:3, Interesting)
That's only one side of the issue. For large 3D objects, you're correct. Billboarding a la Doom is definitely a fading trend. But appearance models are very popular in current 3D and not going away anytime soon.
Think of texturing as an appearance model. Simulating the actual phenomenon of light hitting individual molecules is very heavy, but that's what you're going to need if you're trying to solve the true properties of a real-world material instead of just modeling its appearance.
The same goes for using programmable texture for modeling fur, for example. Modeling the individual hairs one by one with polygons is computationally much more intensive, and the results aren't necessarily better, unless you're doing raytracing to account for the scattered light in the fur. Computers must become a lot faster (millions, billions, gazillions of times) before appearance models go away.
mathematical calculation of a reality is more expensive than the reality itself
That may well be true. Luckily humans do not perceive the reality directly, so most of the information contained in it is lost, and modeling just what is perceivable (i.e. appearance) continues to be a justifiable approach.
Key is fast generation (Score:1)
Every CT scan frame generates a few points, depending on the resolution you're looking at. So if you have a 3D object, the CT scan will take many sections of it. Each section will show a lot of points. The thicker parts will have a higher density of points, while a thinner part will have lower density. This gives the 3D effect, and it does the generation a lot faster. When you want to take a closer look, it generates more points, giving you a higher-resolution picture.
It will look similar to a fractal being generated using the point method, if you've ever seen one: the longer you wait, the better the picture is.
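The parent's density-proportional idea fits in a few lines of toy code (the scale factor and the random jitter are my own assumptions, not anything from the paper):

```python
import random

def stipple_slice(slice_values, scale=5):
    """Turn one CT slice (2-D list of intensities in 0..1) into stipple
    points: each cell emits a point count proportional to its intensity,
    so thicker/denser material comes out darker."""
    points = []
    for y, row in enumerate(slice_values):
        for x, v in enumerate(row):
            for _ in range(int(v * scale)):  # denser material -> more dots
                points.append((x + random.random(), y + random.random()))
    return points

dense = stipple_slice([[1.0, 1.0], [1.0, 1.0]])   # bone-like slice
sparse = stipple_slice([[0.2, 0.2], [0.2, 0.2]])  # soft-tissue-like slice
```

Stack slices like these along Z and you get the progressive, fractal-style refinement the parent describes: more points per pass, sharper picture.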
Re:No polygon replacements. (Score:3, Funny)
Actually, I'd argue that 'rendering' (in the true abattoir sense of the word) is the whole point of Id games...
Re:No polygon replacements. (Score:2)
I guess you've never seen someone gut-shot in Soldier of Fortune II. :)
(Not id, but Quake engine)
I don't see what's new or novel about this (Score:5, Informative)
Most of the complexity in volume rendering consists of preprocessing the data (alpha testing would be a simple way, other methods involve transformations into the frequency domain, etc.) to reduce the asymptotic complexity of the set to be rendered from the naive O(n^3) to something which corresponds to the actual visible set, not the actual rendering itself.
I don't think they are doing anything different in this stage -- it's still the same dataset that needs to be worked with, after all.
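For concreteness, the simplest alpha-test style preprocessing amounts to thresholding away empty space, so the set handed to the renderer tracks the occupied voxels rather than the full O(n^3) grid (a naive sketch; the threshold value is arbitrary):

```python
def cull_empty_voxels(volume, threshold=0.05):
    """Keep only voxels whose density exceeds a threshold.
    volume is a nested list indexed [z][y][x]; returns ((x, y, z), density)
    pairs for the occupied voxels only."""
    kept = []
    for z, plane in enumerate(volume):
        for y, row in enumerate(plane):
            for x, v in enumerate(row):
                if v > threshold:
                    kept.append(((x, y, z), v))
    return kept

# A mostly-empty 1x1x2 volume keeps only its one dense voxel.
kept = cull_empty_voxels([[[0.0, 0.9]]])
```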
Re:I don't see what's new or novel about this (Score:1)
Second, I don't think this is a volume rendering application so much as a rapid surface visualisation. As the data for medical imaging often starts as points from some sort of scan, being able to stay within the point framework can conceptually reduce the vertex decimation complexity. However, it is difficult to see how this applies here, as no reference to the work is directly provided in the article.
Third, the "new" part about this is that they've successfully applied the concept to both NPR and imaging to produce really good medical images. The "novel" part is that medical imaging is always good for the "oooohs and aaahs"; almost makes me want to change my area of research.
Re:I don't see what's new or novel about this (Score:2)
The article did say that the viewer could choose to drill down and selectively render the scene in detail using a more "realistic" method, though.
bad journalism alert (Score:1, Insightful)
Let's see... Id is a company devoted to making games where you run around aimlessly killing and dismembering people.
The people who developed the stippling effect are hardworking professionals trying to help mankind by developing technology that could help further medicine.
How can you even compare these two?
A truly thoughtless comment.
Re:bad journalism alert (Score:3, Insightful)
Not so. In fact, many of the developments first applied to games are usually directly applied to the medical field and other visualization fields.
Re:bad journalism alert (Score:1)
And you read Slashdot? ;-P
Re:bad journalism alert (Score:2, Insightful)
The army has used Id's technology before for training simulators, so it can be said that their software DOES help America in at least one way other than entertainment.
so that's what they call it.. (Score:1)
In stippling, also known as pointillism, the artist creates numerous dots with paint, ink or pencil to produce gradations of light and shade, forming an image."
In geography class we used to do this to draw deserts on the maps. Funny though, we called it 'Let's piss the teacher off' and the whole class would do it all at once. Oh well.
This new application of it looks pretty promising. I used to work at a hospital and I saw the CT scans being done. This really would be a nice QUICK way to see what's going on.
I never know ATI/NVIDIA hires strippers... (Score:1, Funny)
Re:I never know ATI/NVIDIA hires strippers... (Score:2)
Then again, most lonely CS majors end up at strip clubs at the end of the week, giving their hard-earned cash to boob girls. It all comes full circle.
So I guess, in a way, you can say that ATI/NVIDIA already hired strippers, and they're doing a fine job.
Isn't this just... (Score:1)
Re:Isn't this just... (Score:1, Informative)
Voxel? (Score:1)
And then they say 3D stippling...so in other words, they reinvented voxels?
-- Tino Didriksen / projectjj.dk
I knew... (Score:1)
While on the subject of real-time filters... (Score:5, Informative)
That said, you CAN have sketchy-looking Quake if you want with NPRQuake [fileshack.com]. I've tried this and it looks incredible- it's a shame no commercial games have used this technique yet. Reminds me of that 80s music video where the gal walks into the mirror, and everything's all "pencilly-looking" but in real-time... now what was that damn song? (racks brain)
Also check out Waking Life [imdb.com]. It's available on P2P as I write this, but you didn't hear that from me, and you're better off renting the DVD for all the extra goodies. It's not as pretentious as many make it out to be, and the visuals alone are worth it.
Re:While on the subject of real-time filters... (Score:1)
Damn cool video, eh song.
[OT] While on the subject of real-time filters... (Score:1)
Reminds me of that 80s music video where the gal walks into the mirror, and everything's all "pencilly-looking" but in real-time... now what was that damn song? (racks brain)
"Take On Me" by A-Ha.
Re:While on the subject of real-time filters... (Score:1)
Re:While on the subject of real-time filters... (Score:2)
A-ha's Take on Me (MTV Real clip [mtv.com]).
Re:While on the subject of real-time filters... (Score:2)
I believe that was "Take on me" by Ah-Ha.
Re:Take on Me - Ah ha. (Score:2)
one of the first (if not the first) to use animation in a video
Yesterday, I saw the video of 'Leader of the Pack', by the Shangri-Las, from 1972. Animated.
It's just possible that this was something made by the BBC, not an official video clip, but I'd be very surprised. It looked vintage 1970s (ie, bad).
To read more... (Score:3, Informative)
Next advancement in medical imaging (Score:5, Funny)
...a case of Art preceeding Science (Score:1)
Though it isn't exactly an example of an internal organ, noted author Kurt Vonnegut has already made progress in this field. In one of his books (Cat's Cradle?) he makes use of the ASCII rendering technique to provide the reader with a detailed visual of a sphincter:
Whoa... just imagine how many polygons that would have taken!
Re:...a case of Art preceeding Science (Score:2)
This is data visualization , not pretty graphics (Score:4, Insightful)
This technique is meant to be a fast (real time?) method of viewing medical data, like watching a CAT scan as it's happening. It's *not* attractive, it has no textures, it doesn't render the organs with all their colour or bump maps. What it *does* do is give the surgeon an immediate source of information on the status of the patient's condition. Very interesting stuff, good application of a technique to a real need. But it's not anything to do with Id. It *won't* make Quake 4 any faster.
As most people know, including most Slashdotters (I hope), 3D doesn't begin and end with Video Games. Other things use the technologies too.
How exactly is it different.... (Score:2, Insightful)
Ancient knowledge brings amazing possibilities. (Score:3, Funny)
From the article:
Wow, think of what you could do with this! You could print grayscale images using only one color of ink, or color images using only three or four! No longer would we be limited to viewing images on expensive computer and television screens. We could actually print the images on a super-thin sheet of cloth or wood. We could call this new device "paper".
Ancient artists sure were smart.
Maybe this is unrealistic... (Score:2, Funny)
Who needs fancy 'wares (Score:1)
"This image of a human cranium was created with a new kind of computer-imaging software that uses the ancient technique of stippling to convert complex medical data into 3-D images that can be quickly viewed by medical professionals. Data from CT scans were converted into dots to create the stippled image. Cave dwellers and artisans used stippling thousands of years ago to create figures by painting or carving a series of tiny dots."
See what I mean?
ever played q2 (Score:1, Informative)
Anyone else remember this?
Not really that exciting (Score:5, Interesting)
Probably, but not because of this. This technique would have very little use in a gaming environment. Indeed, algorithms intended for medical imaging rarely do. In this particular case, the dotted images don't really provide any sort of occlusion. That is, you can see right through the image to whatever is behind it. Great for medicine (where the whole point is to see inside the patient's body), bad for games.
As a matter of fact, when I read this, my only thought was "well, duh". I do 3D graphics myself, and I am having a hard time believing that this technique is new. Particle system rendering? There must be something more to this that the dumbed-down article isn't telling us. Maybe they have a new, advanced algorithm for deciding exactly where to place the dots... that really must be it. As long as we're reporting on low-level algorithms, I have a new algorithm I came up with for drawing borders on the silhouette edges of cartoon renderings efficiently. Do you want to hear about it? No? Aww...
Re:Not really that exciting (Score:2)
My first thought on reading it was guessing that the amount of heavy computation required to turn raw imaging data into one of these pictures is much less than the amount required to transform the data to be fit to render in other ways. I'm betting the dots are much closer to what they actually get from their device.
uh oh (Score:3, Funny)
I really, really hope that was an unintentional pun.
How Nice (Score:1, Troll)
While I for one am delighted to see that the usual low expectations of tax-dollar-funded research have in this case been confounded, I can't help but wonder how much genuine innovation has been stifled by the need for researchers to jump through the usual hoops for their precious grant money, to say nothing of the frustration these researchers must feel as their hard work skips merrily off into the public domain.
All water under the bridge, I suppose. I wait with delighted anticipation for some hot for-profit startups to get ahold of this software and, with the invisible hand of the market as their guide, take this technology (and hopefully my mutual funds! ^_^) to astounding new heights.
Downloadable (Score:5, Informative)
Qt and GPL'd (Score:1)
Apparently Tuebingen will release the rendering code [uni-tuebingen.de], which uses Qt.
GPL. Let's hope... (Score:2)
Um. What is this crap? (Score:5, Interesting)
And how 'bout these amazing captions? They read like a typical press release:
IMAGE CAPTION 1: This image of a human cranium was created with a new kind of computer-imaging software that uses the ancient technique of stippling to convert complex medical data into 3-D images that can be quickly viewed by medical professionals. Data from CT scans were converted into dots to create the stippled image. Cave dwellers and artisans used stippling thousands of years ago to create figures by painting or carving a series of tiny dots. More recently, 19th century Parisian artist Georges Seurat used the method, also called pointillism, to draw colorful, intricately detailed works. Because dots are the most simple visual element in a picture, they also are ideal for computer visualizations.
IMAGE CAPTION 2: This picture of a human foot was created with a new kind of computer-imaging software that uses the ancient technique of stippling to convert complex medical data into 3-D images that can be quickly viewed by medical professionals. In this image, data from CT scans were converted into dots to create the stippled image. Stippling uses tiny dots to create an image. Because dots are the most simple visual element in a picture, they also are ideal for computer visualizations.
Oh well, at least their subjects and verbs agreed in number. (...data...)
Re:Um. What is this crap? (Score:2)
So? What's your point? The author was just mentioning in a lighthearted way that computer graphics unrelated to games were getting some attention.
you'd think it's simple, but it's not (Score:5, Interesting)
It is not as simple as it seems. You want the nearer bones (or whatever structure) to show up more, but not completely obscure what is behind. And you want the stuff behind to look "behind". But how?
It is not the same problem as calculating normals of polygons to see which surfaces are facing the viewer, sorting things by depth, and finding out what is completely obscured by what else. Go back, and think again.
I'm guessing (without reading the paper), that the point of using dots is that the dots are not infinitely small, but rather have a small measurable size, and so the nearer dots are drawn larger, but that all dots are small enough that they don't tend to "hide" each other in the Z direction, but rather "pile up" a bit to make the piled up places darker. This sort of "implementation" is interesting I think solely because one might be able to implement it in a way that makes use of fairly standard operations implemented by vroomy graphics hardware. (I.e. it is not otherwise an obvious implementation of the desired operation, and I'll guess that the initial reaction of the people who built the graphics hardware/driver is "hey, you're abusing it!", followed almost immediately by "wow, cool!". It's as absurd and wonderful as if you drew a cloud of smoke between you and another object by drawing each particle in the cloud.)
Re:you'd think it's simple, but it's not (Score:3, Informative)
No. Having had a quick glance at the paper (available through the link to the renderer provided elsewhere), the technique centres around generating a number of points per voxel that varies according to the shading you want at that location in the image. Hence, the density of dots in any given region will vary in proportion to the "darkness" of the underlying data set in that region. The tricky part is working out the number/distribution of points that will produce a viewable image with all desired features highlighted, given that most images will be viewed at a resolution that would cause the object rendered to appear as a black smudge, even at a maximum point density of 1 point per voxel.
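One way to sketch the density idea (not the paper's algorithm; the probabilistic emission and the gain constant are my own simplifications to dodge the black-smudge problem mentioned above):

```python
import random

def stipple_voxels(voxels, gain=0.05):
    """Emit at most one point per voxel, with probability proportional to
    the voxel's 'darkness'; the gain keeps overall density low enough that
    the object doesn't collapse into a solid black smudge."""
    random.seed(0)  # reproducible sketch
    points = []
    for z, plane in enumerate(voxels):
        for y, row in enumerate(plane):
            for x, v in enumerate(row):
                if random.random() < v * gain:
                    points.append((x, y, z))
    return points

# Even a fully 'dark' 8x8x8 block yields only a sparse cloud of dots.
pts = stipple_voxels([[[1.0] * 8 for _ in range(8)] for _ in range(8)])
```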
Re:you'd think it's simple, but it's not (Score:2)
This is actually pretty easy. For each row of voxels running along the Z-axis from the "front" of the dataset to the "back", generate a sum of the densities of the voxels in the row. The end result is a two-dimensional greyscale image which you can then dither according to your preference of dithering algorithms.
Since this makes rotation fairly costly, my guess is that what they are actually doing is a simple transform of an n-bit three-dimensional array into a somewhat larger 1-bit three-dimensional array, with the dithering happening in actual threespace and then being frozen; rotation is then a simple matrix transform applied to a few tens of thousands of fixed 1-bit points -- a trivial operation with modern CPUs.
My guess is that the graphics hardware is not some fancy 3D accelerated card at all, but just an ordinary 2D business desktop card.
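If the parent's guess is right, the per-frame work really is tiny: once the dithering is frozen into a fixed point set, each new view is just a rotation applied to every point (a sketch under that guess, not the actual implementation):

```python
import math

def rotate_points_y(points, angle):
    """Rotate a cloud of fixed stipple points about the Y axis; follow with
    an orthographic projection (drop z) to get screen positions."""
    c, s = math.cos(angle), math.sin(angle)
    return [(c * x + s * z, y, -s * x + c * z) for (x, y, z) in points]

# A quarter turn maps a point on the +X axis onto the -Z axis.
pts = rotate_points_y([(1.0, 0.0, 0.0)], math.pi / 2)
```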
Isn't new... Future Crew did it years ago (Score:2, Interesting)
Oh yeah... (Score:2)
Pointillism (Score:3, Funny)
Now, a nice impressionist rendering would be great. Although, I'm not sure I'd want a neurosurgeon screwing around with my brain based on an artistic impression of it.
"Well, see the giant green splotches represent perverted thoughts and... well, there isn't much else to speak of. Apparently, this small yellow part over here is occupied with programming, and it's slowly being invaded by a brown sludgy part which wants some more coffee. Overall, the painting's not worth much, and I certainly wouldn't want it hanging over my couch. Ok... let's make the first incision."
id did this already (Score:4, Interesting)
You missed the point. (Score:2, Insightful)
Re:id did this already (Score:2)
No. Stipple alpha had been used in computer graphics LONG before then. Heck, the arcade game Hard Drivin' (1989) used it extensively.
Carmack really doesn't innovate a hell of a lot in the graphics techniques he uses, and I think he'd agree. He's just always the first one to do his research, take existing high-end rendering techniques, and implement them really well on the PC.
I don't see the point (Score:2)
Re:I don't see the point (Score:2, Funny)
Game Application (Score:2, Interesting)
For instance, the cel shading technique, by which specifically designed shading networks are applied to polygons in order to emulate the look of 2 dimensional animation. For the same reasons that technique has garnered so much popularity over recent years in the gaming industry, an application like this may find similar inspiration.
In addition, this type of rendering goes beyond gaming, right into the entertainment industry. Art studios are constantly looking for new ways to present their animations. There have been several festival animations, done in 3D environments, that were purposely rendered in 2-dimensional ways. Who's to say you won't see this method used in the future?
Yes! (Score:2)
Interesting question.. (Score:1)
Mightier than ID? (Score:1)
If it was useful he would already have used it (Score:1)
No, because if it was worthwhile, John would have already used it. It's hardly an unknown technique. I forget the name of the company, but it was used about 8 years ago by a French company in a 3D beat-em-up. The game mags described the effect as 'wibbly'.
It's really hard to beat boring old polygons to get a reasonably convincing and solid look for not too much processor. Maybe something like this could be used to add detail around the silhouette edges so you don't see so much angularity there, which is, and has been for a while, the weakest part of realtime renderings.
Technique was **PATENTED** (Score:1, Funny)
It is time to PAY Grog's estate and not STEAL the work of these innovators.
I don't think the rendering is the problem (Score:3, Insightful)
I think the reason for doing a stippled technique is to cut down on preprocessing of the data set, rather than to speed the actual rendering of the graphics.
You're trying to visualize a 3D volume, but you don't have a surface map. What you have is a series of bitmaps taken at different z-depths. You can either just render the z-ordered bitmaps with appropriate transparency (expensive for your graphics hardware) or you can try to calculate surfaces and render polygons. Calculating the surfaces can be extremely expensive (think hours of computer time on what was not too long ago a super-computer class machine).
It looks like what they've done is find a way to render the bitmap data with (a) minimal preprocessing and (b) not needing hundreds of megs of video RAM.
A few years ago I shared a VR setup with some people visualizing seismic data. They were always complaining about the Onyx only having 256 megs of video RAM...
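For reference, the "z-ordered bitmaps with appropriate transparency" option is just back-to-front over-compositing of the slices, roughly like this (a toy sketch; the constant per-slice alpha is my simplification):

```python
def composite_slices(slices, alpha=0.1):
    """Back-to-front 'over' compositing of z-ordered grayscale slices
    (each a 2-D list of intensities in 0..1): the brute-force alternative
    to extracting polygonal surfaces from the volume."""
    h, w = len(slices[0]), len(slices[0][0])
    out = [[0.0] * w for _ in range(h)]
    for plane in slices:  # slices ordered back to front
        for y in range(h):
            for x in range(w):
                out[y][x] = plane[y][x] * alpha + out[y][x] * (1.0 - alpha)
    return out
```

Cheap per pixel, but it touches every voxel on every frame, which is exactly why it eats fill rate and video RAM.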
This is really cool... here's why (Score:3, Insightful)
I felt I should try and explain a few things to help the less, um, "graphics savvy" among us appreciate and understand what's going on here.
First, we need to know what "non-photorealistic rendering" (NPR) means. It is NOT (as some fool mentioned) a "tradeoff" and inferior to photorealistic effects. It only implies that the rendering does not use a light-transport model to achieve its results (hence not realistic in terms of how lights/cameras work).
This is a GOOD thing. In a CAT scan you don't want a picture of the guy's head. You want an image giving you useful information about the internal structures of the guy's head. Photorealistic rendering would be as useful as taking a picture of your patient.
The problem of getting a useful image from a large volume dataset is non-trivial. Doing this at interactive rates is even tougher. Further, drawing realistic stipples is difficult in its own right, because of the nature of the stippling technique (spacing and distribution are used to convey transparency, as well as contour).
The images produced by this technique are amazing, and look very close to what one might imagine an artist would produce for a textbook. That's incredible, people! If you compare these images to those that doctors currently have to look at (slices, color coded density maps, etc) you'll notice that the stipples are much easier to understand, and look very natural.
Congrats to the Purdue team and kudos to Slashdot for covering a real comp sci paper, despite the fact that the yokels in the group think that it has something to do with Quake 2. (Groan)
How close to splatting? (Score:2)
Comment removed (Score:3, Funny)
Stippling: everything old is new again... (Score:3, Informative)
For many decades, stippling was the standard technique used for rendering biological or medical illustrations. I suppose it has something to do with the printing processes used for line art being cheaper than those for half-tones.
Indeed, I see that this journal [www.uqtr.ca] and perhaps others still say "Use 'stippling' and 'hatching' techniques to achieve tonal quality. Avoid the use of shading (pencil, wash, or airbrush) for a tonal effect..."
Now, if we just had a font that reproduces the look of Leroy lettering?*
*(OK, OK, a Leroy lettering set consisted of a sort of stencil, in which the letters were merely engraved deeply rather than perforating all the way through, and a little pantograph device. The pantograph had a technical pen and a tracing point. As you followed the stencilled letters with the tracing point, the technical pen would make corresponding motions on the paper. Very common for captions in technical illustrations in research papers, museum displays, etc. Obviously too neat to be handwritten, yet obviously not typeset...)
In Sun MicroSystem 3D software in 1980s (Score:2)
Before Toy Story... (Score:2)
At a guess, the technology didn't go anywhere because it was too slow to be of much use.
Stippling is faster because... (Score:2, Informative)
This is an advantage to doctors and medical professionals in that they don't care if an image is perfect; they would benefit greatly from an imperfect image that more clearly represents the information in real time.
Dots are simpler in 3d, especially for this, because everything is purely ratios and percentages, rather than exact formulas for edges of lines.
As stated in a previous post, the goal for this is the PROPERTIES, not the graphical capabilities. Have you ever seen a sonogram? Those images are extremely cryptic, but with a trained eye, they are extremely useful because they depict the truth not in an exact way, but in a recognizable way.
We're human, we can use our brain to interpret this stuff.
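The "ratios and percentages" point can be sketched in a few lines: the dot count in a cell is just a ratio of its density. The density values and the `max_dots` budget below are invented for illustration:

```python
# Hypothetical row of normalized voxel densities in [0, 1].
densities = [0.1, 0.8, 0.4]
max_dots = 10  # assumed stipple budget for a fully dense cell

# No edge equations or surface fitting: the number of dots drawn in
# each cell is simply proportional to the density there.
dot_counts = [round(d * max_dots) for d in densities]
print(dot_counts)  # [1, 8, 4]
```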
stippling (Score:2)
Somebody keeps taking my stippler
Bill said I'm supposed to have my own stippler
I'm going to set the building on fire
To put a few things straight... (Score:2, Informative)
This technique claims to reinvent neither voxels nor point sprites; it's rather a way to convert voxel datasets (like CT scans) into a three-dimensional stipple representation. Imagine a particle system with the particle density at a given place roughly proportional to the density value in the voxel data at that place.
Ok, it's not that easy, there are a few algorithms described which allow for more "intelligent" placement of particles to enhance features and the general transparency, which makes up the main part of the paper.
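One naive way to get particle density roughly proportional to voxel density is rejection sampling. The grid and density values below are invented for illustration, and the paper's actual placement algorithms are smarter than this:

```python
import random

random.seed(42)

# Hypothetical 2x2x2 voxel grid: the z == 1 layer is dense, the rest sparse.
voxels = {(x, y, z): 0.9 if z == 1 else 0.1
          for x in range(2) for y in range(2) for z in range(2)}

# Rejection sampling: keep each candidate point with probability equal to
# the density of the voxel it lands in, so dense regions collect more stipples.
particles = []
for _ in range(1000):
    p = (random.randrange(2), random.randrange(2), random.randrange(2))
    if random.random() < voxels[p]:
        particles.append(p)

dense = sum(1 for p in particles if p[2] == 1)
sparse = len(particles) - dense
print(dense > sparse)  # the dense layer gets far more stipples
```

The resulting point list is exactly the kind of static particle set that can then be handed to the graphics card once and redrawn from any angle.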
The trick is then that these particles can be sent to a modern graphics card (GF3 and up) for immediate and fast display. You can zoom and rotate the model and your camera without having to recalculate anything, which makes rendering really fast.
A few future thoughts in the paper are about using vertex shader programs to shift some of the per-particle calculations to the GPU, which would enhance rendering speed and clarity further, as angle-dependent particle calculations would then be possible without any significant overhead.
I didn't find any real thoughts on decreasing the complexity of the precalculation process though. Still, having to calculate the major part only once per dataset instead of once per frame is a significant improvement over similar methods (such as most triangulation methods, which require a complete re-calc as soon as you try to look at other features of the dataset).
And the comparison with Id? Come on, this doesn't have ANY relationship with 3D games. Furthermore, Carmack is a good engineer who knows how to deploy new rendering techniques at exactly the time PCs are good enough for them, but all of his techniques since Wolf3D were invented years ago by the very bunch of PhDs etc. who also wrote this paper.
For demos of course... hmmm... this could actually make metablobs look good
Re:Voxels? (Score:1)