Graphics | Software | Science

Stippling As Fast 3D Technique (185 comments)

An anonymous reader writes "This stippling effort won best paper at an IEEE conference in Boston. Could real-time medical rendering be whizzier than Id?"
This discussion has been archived. No new comments can be posted.

  • by pcbob ( 67069 ) on Wednesday November 27, 2002 @02:03AM (#4765357) Homepage
    Probably not, but it's certainly just as bloody... or maybe i'm underestimating doom III.
    • actually,

      i think the only thing revolutionary about it is its application and the fact that it is being used for VOLUMES - which is really smart.

      it's basically a POINT CLOUD which is the data that you get back from a lot of SCANNERS used to scan models and people for movie special fx.

      but this point cloud doesn't only go for surface features but volume detail.

      and let's not forget - it's just RENDERING VERTICES.

      so at most, you could have colored dots but not any kind of texturing or advanced surfacing....

      so not that sexy for games but really cool if it speeds up medical imaging.

      not brilliant for technology (at least on renderside) but the idea is sharp.

      jin
      • I could think of lots of effects it could be used for in games (how about as an night-vision or x-ray vision style power up).
      • it's wrong that you couldn't have texturing. you could even have 3D textures---something most current gfx cards still can't do. remember the voxel terrain of comanche? they had a texture across their terrain back in 1994. you're right, "at most, you could have coloured dots", but that's exactly what a texture is about: colouring the picture elements, whether those are pixels or voxels. not sexy for games? comanche was _damn_ sexy back in the day, you wouldn't have expected that kind of terrain to be possible on your 25MHz 486!

        of course, texturing only looks good if the voxels are dense enough and form a solid, whereas the idea of the technique we're talking about here is all about rather sparse voxels. you could still colour them in to make arteries look different from veins or something.

        an issue with which i'd be much more concerned: what kind of hardware acceleration can you get for that technique? i doubt that it renders 10x faster than polygons with a comparable level of detail on your vanilla geforce.
  • ...when will it be used for pr0n?
  • by Frothy Walrus ( 534163 ) on Wednesday November 27, 2002 @02:05AM (#4765364)
    Stippling is just the application of small, uniform polygons (aka "dots") in rendering images. To modern graphics hardware, a dot may as well be a polygon, so we haven't gained much in practical terms.
    • by pVoid ( 607584 ) on Wednesday November 27, 2002 @04:23AM (#4765746)
      To modern graphics hardware, a dot may as well be a polygon, so we haven't gained much in practical terms.

      Actually, modern hardware can be made to render dots only (ie vertices of polygons/triangles) as opposed to rendering the whole shaded surfaces. It's not a hack by making a small enough surface that looks like a dot, it's just actually rendered as dots. For those interested to see, there's a demo [nvidia.com] for nVidia cards where you can tell it to render dots only...

      I haven't read too much detail about this, but if IEEE says it's the best paper, they must be doing something different from what normal cards do, i.e. probably bypassing normal rendering methods, which use matrix multiplications heavily, and instead making some small assumptions - like maybe no perspective correction - and going with smaller, faster transform equations...

      If that's not the case, I give them a *yawn*.
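For the curious, the "dots only" idea pVoid describes can be sketched in a few lines. This is a hypothetical illustration (my own function names, not the nVidia demo or the paper's code): each vertex gets one transform and one emitted pixel, with no triangle rasterization, shading, or texturing anywhere in the loop.

```python
import math

def project_points(vertices, fov_deg=60.0, width=640, height=480):
    """Project 3D vertices to integer screen coordinates as bare dots.

    A point-only renderer skips triangle setup, shading, and texturing
    entirely: one transform per vertex, one pixel per vertex.
    """
    # focal length derived from the vertical field of view
    f = (height / 2) / math.tan(math.radians(fov_deg) / 2)
    dots = []
    for x, y, z in vertices:
        if z <= 0:  # behind the camera: nothing to draw
            continue
        sx = int(width / 2 + f * x / z)   # simple perspective divide
        sy = int(height / 2 - f * y / z)
        if 0 <= sx < width and 0 <= sy < height:
            dots.append((sx, sy))
    return dots

# a vertex straight ahead of the camera lands at the screen centre
print(project_points([(0.0, 0.0, 5.0)]))  # → [(320, 240)]
```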

  • is still a point sprite ;)
  • Alright, now I can convince my friends that Georges Seurat did a painting of someone's colon!

    (-1 Defaming the Glorious Arts)
  • Can a bunch of PhDs who are all supposed to know quite a bit about what they are doing, including Purdue EE professors, a researcher or two from IBM TJ Watson, and some almost-PhD graduate students, come up with something as mighty as what Id Software can invent?

    Id is great and everything, but gee, I hope the answer is yes, they probably can.

    • by Error27 ( 100234 )
      Id created their genre and has been the technology leader ever since. Some first-person shooters were arguably more fun, and in Duke Nukem you could kick the head around, which was pretty cool. But Id has always had the best technology.

      You think that a few academics could beat them at graphics programming when no one else has been able to? That seems silly to me. That's like saying, "He could win a race in the olympics but he is too busy to enter."

      Until some professor creates a better graphics engine than whatever Id is producing, I am afraid they will have to deal with it: a college dropout is kicking their butts...

    • From an older /. post: This article [ieee.org] on the IEEE website is a fascinating look into Id's (Carmack's) programming firsts.

      One thing that is clear is that Carmack made the most of academic research. His genius lies in the drive and ability to exploit what's out there.

    • by pVoid ( 607584 )
      Don't confuse "technique" with "state of the art"... Or theory and practice for the matter...

      Academics are good at coming up with applicable theories. There's a world of difference between a theory (and what is necessary to create one), and its application...

      Id has know-how... The same know-how that the ironworker gets from handling iron and knowing small things, like how it behaves under certain conditions... This know-how, the scientist doesn't have - or need.

      Id is a software artisan.

      If you find what I just said theoretical, take this simple example: Id probably spent weeks just optimizing the asm routine to draw a line.

      The scientist wouldn't be interested in that... all they would care about is to prove that the line can be drawn; making it render fast is left to the reader as an exercise...

      • Everyone who played with graphics a few years ago probably played about with optimising the line drawing program at some point :)

        Btw, I worked out how denther got it faster than the fastest algorithm that they published - they set the mask to write to four separate buffers simultaneously. I forget the details - it's been many years, but I was very pleased with myself :)

  • by DarkHelmet ( 120004 ) <mark&seventhcycle,net> on Wednesday November 27, 2002 @02:09AM (#4765383) Homepage
    If you look at the bottom:

    Abstract

    NON-PHOTOREALISTIC VOLUME RENDERING USING STIPPLING TECHNIQUES

    This is obviously a compromise approach. There's no way this would be able to make photorealistic games.

    The difference between medicine and gaming is that with medicine, you have a real-life object whose PROPERTIES you're trying to recreate realistically, regardless of how off-color or computer generated it appears.

    With gaming you have an object that's computer generated, whose APPEARANCE you're trying to recreate, with lesser regard to the properties within that object. For instance, most gaming models consisting of polygons have hollow insides...

    People at Id don't bother to render and model the organs. People in medicine don't care about having models of human hearts bumpmapped or glossy.

    This is supposed to be news for nerds. What's with all the mindlessly generated hype?

    • On top of that, I don't think that stippling really applies to the gaming situation as a technique - they are trying to generate 3-D images given complex data sets based on x-ray transmission within the body. I have a feeling that generates points of information, and the old technique would be to either use voxels or translate the data into polygon information.

      Besides the fact that modelling such information for a game would be ludicrously time-consuming, I fail to see why this technique could offer an advantage to the display of 3-D graphics in a gaming sense - and I doubt it's actually faster in terms of the amount of time it takes to get mathematical data translated onto a cathode ray tube. All the article says is that it's faster than previous techniques in medical imaging. The article doesn't say what those techniques are, but since I can't for the life of me see how a CT scanner would get polygon information out of x-rays, I think we can all be sure that they aren't at all similar to what the Quake 3 engine is doing.
    • by good-n-nappy ( 412814 ) on Wednesday November 27, 2002 @02:55AM (#4765546) Homepage
      IIRC one of the biggest advantages of stippling in rendering surfaces is that you can get a fast simulation of transparency. Check out here [sgi.com]. So maybe the same applies in 3D. The 3D stippling might allow you to simulate complex semi-transparent volumes - perhaps also avoiding some z sorting or alpha blending.

      Also, maybe you WOULD see more of this in games if it could be done in real time. Just because all we have now is polygons doesn't mean that's the way it has to be.
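The SGI trick linked above is usually called screen-door transparency. A hypothetical sketch of the idea (the function and 2x2 mask are mine, not from the linked page): draw the "transparent" surface only on a subset of pixels chosen by a fixed stipple mask, so no alpha-blending arithmetic or depth sorting is needed.

```python
def screen_door(dst, src, level):
    """Composite src over dst using a screen-door (stipple) mask.

    level says how many of every 4 pixels in a 2x2 cell come from src
    (0..4), approximating 0%, 25%, 50%, 75%, 100% opacity with no
    per-pixel blending and no back-to-front sorting.
    """
    # 2x2 ordered mask: a pixel is "on" when its threshold < level
    mask = [[0, 2],
            [3, 1]]
    h, w = len(dst), len(dst[0])
    out = [row[:] for row in dst]
    for y in range(h):
        for x in range(w):
            if mask[y % 2][x % 2] < level:
                out[y][x] = src[y][x]
    return out

dst = [[0] * 4 for _ in range(2)]
src = [[9] * 4 for _ in range(2)]
print(screen_door(dst, src, 2))  # → [[9, 0, 9, 0], [0, 9, 0, 9]]
```

At level 2 exactly half the pixels show the overlay, which the eye reads as 50% transparency at normal viewing distance.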
      • "IIRC one of the biggest advantages of stippling in rendering surfaces is that you can get a fast simulation of transparency."

        Of course, IANADoctor, but I would imagine that this technique would also be very useful in so-called "4D" [gemedicalsystems.com] ultrasound systems. Increasing the speed of rendered frames with transparency would allow visualization of fast motion, such as heart valves, in three dimensions and in real time. I don't think this is possible with any method except ultrasound (spin relaxation times make MRI unfeasible, and CTs have to image serially, slice by slice, by moving the patient).

        Incidentally, I don't think there are many examples as perfect as this technology of Arthur C. Clarke's observation that "any sufficiently advanced technology is indistinguishable from magic." Think about it: if you were a person living just 200 years ago, what would you think of a device that, when touched to someone's body, allowed you to see what was happening inside, live on a screen?
        • Coincidentally, my wife just had an ultrasound. I had heard of these 4D things and looked into it, and there was really only one in the Bay Area - Los Gatos, I think.

          Anyway, we opted for the normal ultrasound but I was extremely impressed with the process. The individual 2D ultrasound images do not do it justice. In the live version, I could definitely count fingers and toes and even reconstruct a lot of the 3D structure in my head.

          My point is that the radiologists are really, really good at spotting abnormalities in these grainy 2D pictures. So I'm wondering if these 4D ultrasounds aren't mainly just for parents. Not that that is necessarily a bad thing - but maybe it wouldn't be a huge leap for medical diagnosis.

          What I've gathered so far from all these baby books I've been reading is that most prenatal abnormalities cannot be fixed. When something can be fixed, the risks of fixing it are really high. So they are only even going to try fixing the very major problems. I'm guessing that ordinary 2D ultrasounds can probably pick up these major problems. Maybe this will change though as the risks of prenatal surgery go down.
      • I skimmed the vstipple website, and it appears to be a Monte Carlo algorithm: depending on the intensity of a voxel, it has a proportional probability of being drawn.

        The thing is, calling it "stippling" ignores the pre-existing term "Monte Carlo", which is precisely what is going on.

        Similar techniques have been used in the 2D world to generate half-tone images without the use of regular half-tone screens. I believe in that context, it is called generating pseudo random halftones. Floyd-Steinberg is a deterministic version of the same idea.
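The Monte Carlo reading above is easy to sketch. This is only a guess at the flavour of the algorithm (my own code and data layout, not vstipple's): each voxel receives a dot with probability proportional to its intensity, so denser regions accumulate more dots on average.

```python
import random

def stipple(volume, scale=1.0, seed=0):
    """Monte Carlo stippling sketch: each voxel is drawn with
    probability proportional to its normalised intensity.

    volume: dict mapping (x, y, z) -> intensity in [0, 1].
    Returns the coordinates that received a dot.
    """
    rng = random.Random(seed)  # seeded for reproducibility
    dots = []
    for coord, intensity in volume.items():
        if rng.random() < intensity * scale:
            dots.append(coord)
    return dots

vol = {(x, 0, 0): 1.0 for x in range(100)}        # fully dense row
vol.update({(x, 1, 0): 0.1 for x in range(100)})  # sparse row
dots = stipple(vol)
dense = sum(1 for (x, y, z) in dots if y == 0)
sparse = sum(1 for (x, y, z) in dots if y == 1)
# the dense row keeps every dot; the sparse row only about one in ten
print(dense, sparse)
```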
    • I think the key point is that transparency is slow to calculate, and when you can just simulate transparency by dithering/stippling the image then the computer can do it without too much work. That way you don't need a high-end workstation to visualise the scan, and the CPU cycles can be much better used elsewhere.

    • With gaming you have an object that's computer generated, whose APPEARANCE you're trying to recreate, with lesser regard to the properties within that object. For instance, most gaming models consisting of polygons have hollow insides...


      While what you're saying has been true traditionally, it seems to me that more and more game creators are (re-)discovering that it's the physics and structure of real life that give rise to the best appearances.

      IOW, the more appearances need to be real, the more real the models need to be. It turns out that more often it's actually simpler to model reality than to recreate its appearance in some other fashion. That's why research institutions and governments spend so much money on exabit computers to model reality better.

      [ramble]Hmmm... I wonder if someone has already done a proof that the mathematical calculation of a reality is more expensive than the reality itself... That it's more efficient to have actual reality than to try to model it on a computer. [/ramble]

      Mattcelt
      • While what you're saying has been true traditionally, it seems to me that more and more game creators are (re-)discovering that it's the physics and structure of real life that give rise to the best appearances.

        That's only one side of the issue. For large 3D objects, you're correct. Billboarding a la Doom is definitely a fading trend. But appearance models are very popular in current 3D and not going away anytime soon.

        Think of texturing as an appearance model. Simulating the actual phenomenon of light hitting individual molecules is very heavy, but that's what you're going to need if you're trying to solve the true properties of a real-world material instead of just modeling its appearance.

        The same goes for using programmable texture for modeling fur, for example. Modeling the individual hairs one by one with polygons is computationally much more intensive, and the results aren't necessarily better, unless you're doing raytracing to account for the scattered light in the fur. Computers must become a lot faster (millions, billions, gazillions) before appearance models are going away.

        mathematical calculation of a reality is more expensive than the reality itself

        That may well be true. Luckily humans do not perceive the reality directly, so most of the information contained in it is lost, and modeling just what is perceivable (i.e. appearance) continues to be a justifiable approach.
    • I am guessing here, but it seems to me that for every CT scan frame it generates a few points depending on the resolution you are looking at. So if you have a 3D object, the CT scan will take many sections of it. Each section will show a lot of points. The thicker parts will have a higher density of points, while the thinner parts will have a lower density. This will give the 3D effect. And it will do the generation a lot faster. When you want to take a closer look, it will generate more points, giving you a higher resolution picture.

      It will look similar if you have seen a fractal being generated using the point method. The longer you wait, the better the picture is.
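The guess above can be sketched as a simple budgeting step (entirely hypothetical, not from the paper): dots are allotted to each scan section in proportion to how much material it contains, so thicker parts come out visibly denser.

```python
def points_per_slice(slices, points_budget):
    """Allot dots to CT sections in proportion to summed density.

    slices: list of 2D grids of density values, one grid per section.
    Returns one dot count per section; rounding can make the total
    differ slightly from the budget.
    """
    totals = [sum(sum(row) for row in grid) for grid in slices]
    grand = sum(totals) or 1  # guard against an all-empty scan
    return [round(points_budget * t / grand) for t in totals]

thick = [[1.0, 1.0], [1.0, 1.0]]   # dense section: total 4.0
thin = [[0.5, 0.0], [0.0, 0.5]]    # sparse section: total 1.0
print(points_per_slice([thick, thin], points_budget=100))  # → [80, 20]
```

Raising the budget when the user zooms in would give the "more points at higher resolution" behaviour the comment describes.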
    • People at Id don't bother to render and model the organs

      Actually, I'd argue that 'rendering' (in the true abattoir sense of the word) is the whole point of Id games...
    • People at Id don't bother to render and model the organs.

      I guess you've never seen someone gut-shot in Soldier of Fortune II. :)

      (Not id, but Quake engine)

  • by Ryu2 ( 89645 ) on Wednesday November 27, 2002 @02:10AM (#4765387) Homepage Journal
    Essentially, they are just using a different primitive (point) instead of splat or voxel, traditionally used in volume rendering visualizations.

    Most of the complexity in volume rendering consists of preprocessing the data (alpha testing would be a simple way, other methods involve transformations into the frequency domain, etc.) to reduce the asymptotic complexity of the set to be rendered from the naive O(n^3) to something which corresponds to the actual visible set, not the actual rendering itself.

    I don't think they are doing anything different in this stage -- it's still the same dataset that needs to be worked with, after all.
    • First, I think this does qualify as a splat technique. Splatting is really just about using a single type of easily rendered object to visualise the model.

      Second, I don't think this is a volume rendering application so much as a rapid surface visualisation. As the data for medical imaging often starts out as points from some sort of scan, being able to stay within the point framework can conceptually reduce the vertex decimation complexity. However, it is difficult to see how this applies here, as no direct reference to the work is provided in the article.

      Third, the "new" part about this is that they've successfully applied the concept to both NPR and imaging to produce really good medical images. The "novel" part is that medical imaging is always good for the "oooohs and aaahs"; almost makes me want to change my area of research :)
  • This Stippling effort wins best paper at IEEE Boston conference. Could real time medical rendering be whizzier than Id?"

    Let's see... Id is a company devoted to making games where you run around aimlessly killing and dismembering people.

    The people who developed the stippling technique are hardworking professionals trying to help mankind by developing technology that could further medicine.

    How can you even compare these two?

    A truly thoughtless comment.
    • by flikx ( 191915 )

      Not so. In fact, developments first applied to games are often directly applied to the medical field and other visualization fields.

    • Stanley Feinbaum, professional journalist. I have no tolerance for bad journalism!

      And you read Slashdot? ;-P

    • the world needs entertainment as much as it needs medicine. There is no proof that Id's games are damaging to society; America's lack of gun control is a more likely culprit.

      The army has used Id's technology before for training simulators, so it can be said that their software DOES help America in at least one way other than entertainment.
  • "Ancient artists used a technique called stippling - in which pictures are created by painting or carving a series of tiny dots.....

    In stippling, also known as pointillism, the artist creates numerous dots with paint, ink or pencil to produce gradations of light and shade, forming an image."


    In geography class we used to do this to draw deserts on the maps. Funny though, we called it 'Let's piss the teacher off', and the whole class would do it all at once. Oh well.

    This new application of it looks pretty promising. I used to work at a hospital and I saw the CT scans being done. This really would be a nice QUICK way to see what's going on.
  • by Anonymous Coward
    Forget about hiring computer science majors! Just go to the strip clubs and start sending the headhunters!! Get the same performance for 99+% off!!!
    • Uh, actually, the strippers would probably cost a hell of a lot more than the CS majors cost.

      Then again, most lonely CS majors end up at strip clubs at the end of the week, giving their hard-earned cash to boob girls. It all comes full circle.

      So I guess, in a way, you can say that ATI/NVIDIA already hired strippers, and they're doing a fine job.
    • by Anonymous Coward
      No, it's not just voxels. The first hint that it's not just voxels is that 'voxels' is a noun and 'stippling' is a verb. Medical data is routinely stored as high resolution voxel maps which take a long time to render. What these researchers have done is to come up with a way to reduce the data in a voxel map so that the remaining data renders quickly and is representative of the original 3d image.
  • Reading the article, stippling at first sounded awfully lot like what we call pixels nowadays.

    And then they say 3D stippling...so in other words, they reinvented voxels?

    -- Tino Didriksen / projectjj.dk
  • they'd eventually find a good use for stippling some day... Other than lunch money and tutorials that is.
  • by hackshack ( 218460 ) on Wednesday November 27, 2002 @02:15AM (#4765409)
    The trend in game engines is, as it has been in the past, largely towards better image quality. The stippling technique described in the article is a tradeoff for those who'd rather have the medical equivalent of "better framerates."

    That said, you CAN have sketchy-looking Quake if you want with NPRQuake [fileshack.com]. I've tried this and it looks incredible- it's a shame no commercial games have used this technique yet. Reminds me of that 80s music video where the gal walks into the mirror, and everything's all "pencilly-looking" but in real-time... now what was that damn song? (racks brain)

    Also check out Waking Life [imdb.com]. It's available on P2P as I write this, but you didn't hear that from me, and you're better off renting the DVD for all the extra goodies. It's not as pretentious as many make it out to be, and the visuals alone are worth it.

  • To read more... (Score:3, Informative)

    by DraconPern ( 521756 ) on Wednesday November 27, 2002 @02:20AM (#4765431) Homepage
    The actual papers are presented here [purdue.edu].
  • by Woogiemonger ( 628172 ) on Wednesday November 27, 2002 @02:21AM (#4765436)
    Rumor has it, doctors will soon be rendering a patient's internal organs with ASCII art.
    • "Rumor has it, doctors will soon be rendering a patient's internal organs with ASCII art."

      Though it isn't exactly an example of an internal organ, noted author Kurt Vonnegut has already made progress in this field. In one of his books (Cat's Cradle?) he makes use of the ASCII rendering technique to provide the reader with a detailed visual of a sphincter:

      *

      Whoa... just imagine how many polygons that would have taken!

  • by Lupulack ( 3988 ) on Wednesday November 27, 2002 @02:28AM (#4765455)

    This technique is meant to be a fast (real time?) method of viewing medical data, like watching a CAT scan as it's happening. It's *not* attractive, it has no textures, it doesn't render the organs with all their colour or bump maps. What it *does* do is give the surgeon an immediate source of information on the status of the patient's condition. Very interesting stuff, a good application of a technique to a real need. But it's got nothing to do with Id. It *won't* make Quake 4 any faster.


    As most people know, including most Slashdotters (I hope), 3D doesn't begin and end with video games. Other things use the technologies too.

  • ...from drawing an image as a collection of pixels? Isn't the fact that 3D objects can be represented as a collection of discrete pixels well known?
  • by Temporal ( 96070 ) on Wednesday November 27, 2002 @02:35AM (#4765475) Journal

    From the article:

    Ancient artists used a technique called stippling - in which pictures are created by painting or carving a series of tiny dots - to produce drawings on cave walls and utensils thousands of years ago.

    Wow, think of what you could do with this! You could print grayscale images using only one color of ink, or color images using only three or four! No longer would we be limited to viewing images on expensive computer and television screens. We could actually print the images on a super-thin sheet of cloth or wood. We could call this new device "paper".

    Ancient artists sure were smart.

  • Could this be used in an FPS to map people's facial expressions in real time (well, less the appropriate lag)? I'd love to be able to see the pissed-off look on some guy's face who I've fragged three dozen times straight.
  • Awww who needs all that fancy shmancy software, just go hire a cave man!

    "This image of a human cranium was created with a new kind of computer-imaging software that uses the ancient technique of stippling to convert complex medical data into 3-D images that can be quickly viewed by medical professionals. Data from CT scans were converted into dots to create the stippled image. Cave dwellers and artisans used stippling thousands of years ago to create figures by painting or carving a series of tiny dots."

    See what I mean?

  • ever played q2 (Score:1, Informative)

    by Anonymous Coward
    /me points to the stipple alpha option in software mode...
    Anyone else remember this?
  • by Temporal ( 96070 ) on Wednesday November 27, 2002 @02:51AM (#4765532) Journal
    Could real time medical rendering be whizzier than Id?

    Probably, but not because of this. This technique would have very little use in a gaming environment. Indeed, algorithms intended for medical imaging rarely do. In this particular case, the dotted images don't really provide any sort of occlusion. That is, you can see right through the image to whatever is behind it. Great for medicine (where the whole point is to see inside the patient's body), bad for games.

    As a matter of fact, when I read this, my only thought was "well, duh". I do 3D graphics myself, and I am having a hard time believing that this technique is new. Particle system rendering? There must be something more to this that the dumbed-down article isn't telling us. Maybe they have a new, advanced algorithm for deciding exactly where to place the dots... that really must be it. As long as we're reporting on low-level algorithms, I have a new algorithm I came up with for efficiently drawing borders on the silhouette edges of cartoon renderings. Do you want to hear about it? No? Aww...
    • My first thought on reading it was guessing that the amount of heavy computation required to turn raw imaging data into one of these pictures is much less than the amount required to transform the data to be fit to render in other ways. I'm betting the dots are much closer to what they actually get from their device.

  • uh oh (Score:3, Funny)

    by enneff ( 135842 ) on Wednesday November 27, 2002 @02:52AM (#4765537) Homepage
    "whizzier than Id"

    I really, really hope that was an unintentional pun.
  • How Nice (Score:1, Troll)

    by USC-MBA ( 629057 )
    I can't help but notice that this research has been funded by two government agencies [purdue.edu], NASA and the National Science Foundation.

    While I for one am delighted to see that the usual low expectations of tax-dollar-funded research have in this case been confounded, I can't help but wonder how much genuine innovation has been stifled by the need for researchers to jump through the usual hoops for their precious grant money, to say nothing of the frustration these researchers must feel as their hard work skips merrily off into the public domain.

    All water under the bridge, I suppose. I wait with delighted anticipation for some hot for-profit startups to get ahold of this software and, with the invisible hand of the market as their guide, take this technology (and hopefully my mutual funds! ^_^) to astounding new heights.

  • Downloadable (Score:5, Informative)

    by jki ( 624756 ) on Wednesday November 27, 2002 @03:03AM (#4765575) Homepage
    They have made the renderer available, here [purdue.edu] (win 2000 only). I don't think I have the interest to see further than just trying whether it works for me, but if someone does, please let us know if you find anything worth commenting :)
  • ...that they don't confuse this technique with "stapling" the next time you go into the doctor's for a thorough physical. Ouch.
  • by DrunkenTerror ( 561616 ) on Wednesday November 27, 2002 @03:16AM (#4765608) Homepage Journal
    This article sucks, and the /. write-up sucks more. It has virtually nothing to do with id, Doom, or games in general. They're visualizing data sets, not shooting rockets at each other at 60 FPS (or 8 fps in the Doom3 demo ;). Rendering static, previously collected data vs. on-the-fly rendering of a rapidly changing dynamic environment. What should one expect from an anon submission, though? :P

    And how bout these amazing captions? They read like a typical /. dupe. (similarities highlighted)

    IMAGE CAPTION 1: This image of a human cranium was created with a new kind of computer-imaging software that uses the ancient technique of stippling to convert complex medical data into 3-D images that can be quickly viewed by medical professionals. Data from CT scans were converted into dots to create the stippled image. Cave dwellers and artisans used stippling thousands of years ago to create figures by painting or carving a series of tiny dots. More recently, 19th century Parisian artist Georges Seurat used the method, also called pointillism, to draw colorful, intricately detailed works. Because dots are the most simple visual element in a picture, they also are ideal for computer visualizations.

    IMAGE CAPTION
    2: This picture of a human foot was created with a new kind of computer-imaging software that uses the ancient technique of stippling to convert complex medical data into 3-D images that can be quickly viewed by medical professionals. In this image, data from CT scans were converted into dots to create the stippled image. Stippling uses tiny dots to create an image. Because dots are the most simple visual element in a picture, they also are ideal for computer visualizations.

    Oh well, at least their subjects and verbs agreed in number. (...data... ...were...)
    • It has virtually nothing to do with id, Doom, or games in general. They're visualizing data sets,

      So? What's your point? The author was just mentioning in a lighthearted way that computer graphics unrelated to games were getting some attention.

  • by bbc22405 ( 576022 ) on Wednesday November 27, 2002 @03:16AM (#4765610)
    I'm reading lots of comments about "how is this different from just plotting pixels?" and such. If you were given a voxel dataset, and were given the job of showing the internal structure, in a nifty, sorta-transparent, sorta-3D way ... you would likely fail.

    It is not as simple as it seems. You want the nearer bones (or whatever structure) to show up more, but not completely obscure what is behind. And you want the stuff behind to look "behind". But how?

    It is not the same problem as calculating normals of polygons to see which surfaces are facing the viewer, sorting things by depth, and finding out what is completely obscured by what else. Go back, and think again.

    I'm guessing (without reading the paper) that the point of using dots is that the dots are not infinitely small, but rather have a small measurable size, so the nearer dots are drawn larger, but all dots are small enough that they don't tend to "hide" each other in the Z direction, but rather "pile up" a bit to make the piled-up places darker. This sort of "implementation" is interesting, I think, solely because one might be able to implement it in a way that makes use of fairly standard operations implemented by vroomy graphics hardware. (I.e. it is not otherwise an obvious implementation of the desired operation, and I'll guess that the initial reaction of the people who built the graphics hardware/driver is "hey, you're abusing it!", followed almost immediately by "wow, cool!". It's as absurd and wonderful as if you drew a cloud of smoke between you and another object by drawing each particle in the cloud.)

    • I'm guessing (without reading the paper) that the point of using dots is that the dots are not infinitely small, but rather have a small measurable size, and so the nearer dots are drawn larger, but that all dots are small enough that they don't tend to "hide" each other in the Z direction, but rather "pile up" a bit to make the piled-up places darker.

      No. Having had a quick glance at the paper (available through the link to the renderer provided elsewhere), the technique centres around generating a number of points per voxel that varies according to the shading you want at that location in the image. Hence, the density of dots in any given region will vary in proportion to the "darkness" of the underlying data set in that region. The tricky part is working out the number/distribution of points that will produce a viewable image with all desired features highlighted, given that most images will be viewed at a resolution that would cause the object rendered to appear as a black smudge, even at a maximum point density of 1 point per voxel.
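      A minimal Python sketch of the idea described above (dot count scaling with voxel density); the function name, the cap of 4 points per voxel, and the fixed seed are all my own illustrative choices, not details from the paper:

```python
import random

_rng = random.Random(42)  # fixed seed so the sketch is repeatable

def stipple_voxel(density, max_points=4):
    """Emit jittered points inside a unit voxel; the point count
    scales with the voxel's density (0.0 .. 1.0)."""
    n = round(density * max_points)
    return [(_rng.random(), _rng.random(), _rng.random()) for _ in range(n)]
```

      Denser voxels emit more points, so the on-screen dot density tracks the "darkness" of the underlying data, which is the effect described above.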

    • It is not as simple as it seems. You want the nearer bones (or whatever structure) to show up more, but not completely obscure what is behind. And you want the stuff behind to look "behind". But how?

      This is actually pretty easy. For each row of voxels running along the Z-axis from the "front" of the dataset to the "back", generate a sum of the densities of the voxels in the row. The end result is a two-dimensional greyscale image which you can then dither with your preferred dithering algorithm.

      Since this makes rotation fairly costly, my guess is what they are actually doing is doing a simple transform of an n-bit three-dimensional array into a somewhat larger 1-bit three-dimensional array, with the dithering happening in actual threespace and then being frozen; rotation is then a simple matrix transform applied to a few tens of thousands of fixed 1-bit points -- a trivial operation with modern CPUs.

      My guess is that the graphics hardware is not some fancy 3D accelerated card at all, but just an ordinary 2D business desktop card.
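      The two-step recipe above (sum along Z, then dither the greyscale result) can be sketched in plain Python; the 2x2 Bayer threshold matrix and the peak normalization are my simplifications, not anything from the paper:

```python
def project_and_dither(volume):
    """Sum voxel densities along the z axis, then ordered-dither
    the resulting greyscale image. volume[z][y][x] holds floats;
    returns a 2D array of 0/1 stipple values."""
    depth, h, w = len(volume), len(volume[0]), len(volume[0][0])
    # Step 1: accumulate along z to get a 2D greyscale projection.
    grey = [[sum(volume[z][y][x] for z in range(depth)) for x in range(w)]
            for y in range(h)]
    peak = max(max(row) for row in grey) or 1.0
    # Step 2: ordered dithering with a 2x2 Bayer threshold matrix.
    bayer = [[0.25, 0.75], [1.0, 0.5]]
    return [[1 if grey[y][x] / peak >= bayer[y % 2][x % 2] else 0
             for x in range(w)] for y in range(h)]
```

      As noted above, this only handles one viewpoint; rotating the dataset means redoing the projection.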
  • Didn't Future Crew have a pixel-based 3-D effect towards the end of the Unreal demo? It was really cool for its time.
  • We were doing color 3-D Ultrasound in Japan 10 years ago...great fun. You recall the visible man and woman projects right? Checked them out lately?
  • Pointillism (Score:3, Funny)

    by bytesmythe ( 58644 ) <bytesmytheNO@SPAMgmail.com> on Wednesday November 27, 2002 @03:21AM (#4765625)
    Hmm... I've never been a big fan of pointillism. It looks like too much work.

    Now, a nice impressionist rendering would be great. Although, I'm not sure I'd want a neurosurgeon screwing around with my brain based on an artistic impression of it.

    "Well, see the giant green splotches represent perverted thoughts and... well, there isn't much else to speak of. Apparently, this small yellow part over here is occupied with programming, and it's slowly being invaded by a brown sludgy part which wants some more coffee. Overall, the painting's not worth much, and I certainly wouldn't want it hanging over my couch. Ok... let's make the first incision."

  • id did this already (Score:4, Interesting)

    by mewsenews ( 251487 ) on Wednesday November 27, 2002 @03:44AM (#4765666) Homepage
    Does no one remember Quake 2? The software renderer had an option called "Stipple Alpha" which would render transparent entities such as water and glass using a stippling method. It was much faster than true alpha blending, and it got the job done. Carmack's like four years ahead of the curve, as usual.
    • The Purdue scientists know what stippling is. The point is, they found out how to transform voxels into 3D stipples that look good from multiple directions... not stippling in the screen-coordinate sense. So how is Carmack suddenly a pioneer of medical imaging?
    • Carmack's like four years ahead of the curve as usual..

      No. Stipple alpha had been used in computer graphics LONG before then. Heck, the arcade game Hard Drivin' (1989) used it extensively.

      Carmack really doesn't innovate a hell of a lot in the graphics techniques he uses, and I think he'd agree. He's just always the first one to do his research, take existing high-end rendering techniques, and implement them really well on the PC.

  • You can map voxel and density rendering onto modern graphics hardware to get real-time volume rendering without too many problems. Furthermore, medical applications require high fidelity: you don't want doctors to miss some detail, and these kinds of stippling techniques greatly reduce the resolution.
  • Game Application (Score:2, Interesting)

    To shrug this off as having no bearing on the gaming world may be a bit narrow-minded. Remember, just because something is in 3D, it doesn't mean you want it to look like it's in 3D.

    For instance, consider cel shading, in which specially designed shading networks are applied to polygons to emulate the look of 2-dimensional animation. For the same reasons that technique has garnered so much popularity in the gaming industry over recent years, an application like this may find a similar following.

    In addition, this type of rendering goes beyond gaming, right into the entertainment industry. Art studios are constantly looking for new ways to present their animations. There have been several festival animations, done in 3D environments, that were purposely rendered in 2-dimensional ways. Who's to say you won't see this method used in the future?
  • This is the ancient dotball from the Amiga demo days. I knew someone would make good use of it. I don't see what the news is yet, though.
  • The main application for this seems to be medical imaging. Do you think that this technique has any advantages over other visualisation techniques? I.e., can medical professionals spot problems more easily with these images?
  • Nah, nothing is ever going to be mightier than the monsters from the Id. (Forbidden Planet [imdb.com])
  • Could real time medical rendering be whizzier than Id?

    No, because if it was worthwhile, John would have already used it. It's hardly an unknown technique. I forget the name of the company, but it was used about 8 years ago by a French company in a 3D beat-em-up. The game mags described the effect as 'wibbly'.

    It's really hard to beat boring old polygons to get a reasonably convincing and solid look for not too much processor. Maybe something like this could be used to add detail around the silhouette edges so you don't see so much angularity there, which is and has been for a while, the weakest part of realtime renderings.
  • by Anonymous Coward
    by Grog the caveman under the Digital Millennium Caveman Act over 25,000 years ago.

    It is time to PAY Grog's estate and not STEAL the work of these innovators.
  • by jrstewart ( 46866 ) on Wednesday November 27, 2002 @05:32AM (#4765865) Homepage
    Note: I've read the linked article but not the actual paper.

    I think the reason for doing a stippled technique is to cut down on preprocessing of the data set, rather than to speed the actual rendering of the graphics.

    You're trying to visualize a 3D volume, but you don't have a surface map. What you have is a series of bitmaps taken at different z-depths. You can either just render the z-ordered bitmaps with appropriate transparency (expensive for your graphics hardware) or you can try to calculate surfaces and render polygons. Calculating the surfaces can be extremely expensive (think hours of computer time on what was not too long ago a supercomputer-class machine).

    It looks like what they've done is find a way to render the bitmap data with (a) minimal preprocessing and (b) not needing hundreds of megs of video RAM.

    A few years ago I shared a VR setup with some people visualizing seismic data. They were always complaining about the Onyx only having 256 megs of video RAM...
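    A toy Python version of the "z-ordered bitmaps with appropriate transparency" option mentioned above, compositing slices back to front with a single fixed per-slice opacity (a simplification; real volume renderers derive opacity from the data itself):

```python
def composite_slices(slices, alpha=0.1):
    """Back-to-front "over" compositing of z-ordered greyscale slices.
    slices[z][y][x] holds intensity in 0..1; each slice contributes
    with a fixed opacity alpha."""
    h, w = len(slices[0]), len(slices[0][0])
    image = [[0.0] * w for _ in range(h)]
    for layer in slices:  # farthest slice first
        for y in range(h):
            for x in range(w):
                image[y][x] = alpha * layer[y][x] + (1 - alpha) * image[y][x]
    return image
```

    Even this naive loop hints at the cost: every slice touches every pixel, which is exactly the kind of work you'd rather not redo per frame.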
  • by Anonymous Coward on Wednesday November 27, 2002 @06:59AM (#4766053)
    I was quite horrified at the number of comments I saw posted suggesting that generating a decent stippled 2d image from 3D volume data is trivial or somehow like "2d point sprites" or "reinvented voxels".

    I felt I should try and explain a few things to help the less, um, "graphics savvy" among us appreciate and understand what's going on here.

    First, we need to know what "non-photorealistic rendering" (NPR) means. It is NOT (as some fool mentioned) a "tradeoff" and inferior to photorealistic effects. It only implies that the rendering does not use a light-transport model to achieve its results (hence not realistic in terms of how lights/cameras work).

    This is a GOOD thing. In a CAT scan you don't want a picture of the guy's head. You want an image giving you useful information about the internal structures of the guy's head. Photorealistic rendering would be as useful as taking a picture of your patient.

    The problem of getting a useful image from a large volume dataset is non-trivial. Doing this at interactive rates is even tougher. Further, drawing realistic stipples is difficult in its own right, because of the nature of the stippling technique (spacing and distribution are used to convey transparency, as well as contour).

    The images produced by this technique are amazing, and look very close to what one might imagine an artist would produce for a textbook. That's incredible, people! If you compare these images to those that doctors currently have to look at (slices, color coded density maps, etc) you'll notice that the stipples are much easier to understand, and look very natural.

    Congrats to the Purdue team, and kudos to Slashdot for covering a real comp-sci paper, despite the fact that the yokels in the group think that it has something to do with Quake 2. (Groan)

  • Sounds a lot like splatting, a common 3D volume rendering technique.
  • by account_deleted ( 4530225 ) on Wednesday November 27, 2002 @08:03AM (#4766158)
    Comment removed based on user account deletion
  • by dpbsmith ( 263124 ) on Wednesday November 27, 2002 @09:29AM (#4766521) Homepage
    ...My, those sample pictures have a wonderful old-timey look to them.

    For many decades, stippling was the standard technique used for rendering biological or medical illustrations. I suppose it has something to do with the printing processes used for line art being cheaper than those for half-tones.

    Indeed, I see that this journal [www.uqtr.ca] and perhaps others still say "Use 'stippling' and 'hatching' techniques to achieve tonal quality. Avoid the use of shading (pencil, wash, or airbrush) for a tonal effect..."

    Now, if we just had a font that reproduces the look of Leroy lettering?*

    *(OK, OK, a Leroy lettering set consisted of a sort of stencil, in which the letters were merely engraved deeply rather than perforating all the way through, and a little pantograph device. The pantograph had a technical pen and a tracing point. As you followed the stencilled letters with the tracing point, the technical pen would make corresponding motions on the paper. Very common for captions in technical illustrations in research papers, museum displays, etc. Obviously too neat to be handwritten, yet obviously not typeset...)
  • Sun had a 3D volume rendering package circa 1984 called SunVision(?). Point rendering was an option. Fast, but not as pretty.
  • ...came out, either Pixar or Industrial Light and Magic (don't remember which) was working on a data visualization package that displayed MRI data. The demo was pretty impressive - it was a 3D rendering of someone's pelvic region, white bones, pink kidney, etc. It was as if you were looking at a digital invisible man.

    At a guess, the technology didn't go anywhere because it was too slow to be of much use.

  • Stippling is not exact. At no point did "cave dwellers" specify that there had to be a point at "x,y,z".

    This is an advantage to doctors and medical professionals in that they don't care whether an image is perfect; they would benefit greatly from an imprecise image that more clearly represents information in real time.

    Dots are simpler in 3d, especially for this, because everything is purely ratios and percentages, rather than exact formulas for edges of lines.

    As stated in a previous post, the goal for this is the PROPERTIES, not the graphical capabilities. Have you ever seen a sonogram? Those images are extremely cryptic, but with a trained eye they are extremely useful, because they depict the truth not in an exact way, but in a recognizable way.

    We're human, we can use our brain to interpret this stuff.

  • Somebody keeps taking my stippler

    Bill said I'm supposed to have my own stippler

    I'm going to set the building on fire
  • I didn't read the article, but looked at the paper..

    This technique claims to reinvent neither voxels nor point sprites; rather, it's a way to convert voxel datasets (like CT scans) into a three-dimensional stipple representation. Imagine a particle system with the particle density at a given place roughly proportional to the density value in the voxel data at that place.

    Ok, it's not that easy, there are a few algorithms described which allow for more "intelligent" placement of particles to enhance features and the general transparency, which makes up the main part of the paper.

    The trick is then that these particles can be sent to a modern graphics card (GF3 and up) for immediate and fast display. You can zoom and rotate the model and your camera without having to recalculate anything, which makes rendering really fast.

    A few future thoughts in the paper are about using vertex shader programs to shift some of the per-particle calculations to the GPU, which would enhance rendering speed and clarity further, as angle-dependent particle calculations would then be possible without any significant overhead.

    I didn't find any real thoughts on decreasing the complexity of the precalculation process, though. Still, having to calculate the major part only once per dataset instead of once per frame is a significant improvement over similar methods (such as most triangulation methods, which require a complete re-calc as soon as you try to look at other features of the dataset).

    And the comparison with Id? Come on, this doesn't have ANY relationship with 3D games. Furthermore, Carmack is a good engineer who knows when to deploy new rendering techniques, at exactly the time PCs are good enough for them, but all of his techniques since Wolf3d were invented years ago by the very bunch of PhDs etc. who also wrote this paper :)

    For demos of course... hmmm... this could actually make metablobs look good ;)
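    The "zoom and rotate ... without having to recalculate anything" point above is worth a sketch: once the stipple cloud is precomputed, a new view is just a matrix transform of the points. A minimal Python example (y-axis rotation only; the function name is mine):

```python
import math

def rotate_y(points, angle):
    """Rotate a precomputed stipple point cloud about the y axis.
    The stipple placement itself never has to be recomputed;
    only this cheap per-point transform runs per frame."""
    c, s = math.cos(angle), math.sin(angle)
    return [(c * x + s * z, y, -s * x + c * z) for (x, y, z) in points]
```

    With tens of thousands of points this is trivial for a modern CPU, let alone a GPU vertex unit.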
