3D Visualization Moves Forward

Chris writes "Showing for the first time at the Society for Information Display (SID) conference in Boston was a three-dimensional display with 100 million volume pixels, or 'voxels'. The Perspecta is a hardware and software combination that projects 3D images inside a 500 mm transparent spherical dome. Images 250 mm in diameter can be seen from a full 360 degrees without goggles, allowing the viewer to walk around the image. It can be used to visualize protein structures and to plan surgical and radiation treatment by locating the exact position of a tumour on an x-ray or mammogram. It could also be used in air traffic control, prototype design and security scanning of luggage. Perspecta uses Texas Instruments' Digital Light Processing (DLP) technology and a spinning projection screen, which sweeps the sphere." We've done some previous stories about this globe from Actuality Systems. The trend seems to be toward simulating 3D with high-resolution flat screens, though.
  • by Anonymous Coward
    http://www.opendx.org

    Data Explorer. Formerly from IBM. Runs on Linux and Windows.

    3D protein modelling for the rest of us with only flat-screen monitors.
  • Sweeet (Score:3, Insightful)

    by ZaneMcAuley ( 266747 ) on Friday May 24, 2002 @01:48PM (#3580595) Homepage Journal
    Now if only they can get it larger than a snow globe.
    • Just goes to show the influence that sci-fi has on the development of technology. I remember watching Star Wars: A New Hope and seeing that Death Star model floating in mid-air, and all the communications were done with a holo-projection-like device. Too bad this isn't an open device (it's contained within a globe).

      I wonder who gets credit for this idea, especially since artists steal from each other.

      slaps you on the head with a smelly trout
      • Actually, you could remove the glass dome, but then you would have to be VERY careful not to get whacked by the screen spinning at 1440 RPM. Best to leave it on.
  • by Dick Click ( 166230 ) on Friday May 24, 2002 @01:48PM (#3580601)
    It can also be used to show to a group of people the design flaw in the Death Star.
  • The trend seems to be toward simulating 3D with high-resolution flat screens, though

    Displays that require the user to wear glasses aren't what I'd call an adequate solution. There is a big difference between having to stand on one side of a flat 3D rendering and being able to walk around a real 3D image.

    • Re:Glasses suck (Score:2, Interesting)

      by kevinmik ( 465713 )
      Displays that require the user to wear glasses aren't what I'd call an adequate solution

      I believe what he was referring to were the screens that actually create a 3D image without glasses. They're really impressive. They do this by having two hi-res LCDs, one on top of the other, with a slight offset in the image between the two. The top one is translucent, and the difference between the two pictures creates the illusion of 3D without having to use glasses or anything. It's like you've got your own holographic monitor!

      I just can't wait until they come into my college budget price range (yeah, like that's gonna happen :)
      • There's better than that...

        Autostereo displays [cam.ac.uk]

        I saw this 5 years ago, and it was extremely cool. They played a variant of Pong where the ball went in and out from the plane of the screen to a plane that appeared to be more or less level with the back of the monitor. They also used a cool 3D mouse so you could move your bat. It was totally realistic stereoscopic 3d without any goggles or anything.

        What you're describing sounds like you could render both the bats easily but you couldn't render the ball because it would be in-between the planes of the two LCDs. The autostereo system handles all intermediate distances just fine.
      • kevinmik wrote:
        I believe what he was refering to were the screens that actually create a 3d image without glasses.
        From the link given in the slash post (http://news.com.com/2100-1001-922122.html)
        The setup uses the same principles as those employed by an old 3D movie theater, Bresnahan said. Indeed, the system comes complete with red-and-blue-lensed glasses.
        -- Jack
  • by locating the exact position of a tumour on an x-ray or mammogram.

    Porn is the killer app. First the next-generation Internet and now this.
  • Volume Graphics [volumegraphics.com] has some interesting software to go along with the kind of display hardware talked about in the article. With VG's stuff you can "capture" a 3D displayed image based on voxels and slice-and-dice it...
  • /.'d already! :(
  • by Bowie J. Poag ( 16898 ) on Friday May 24, 2002 @01:56PM (#3580648) Homepage


    Ok, it's cool and all.. yeah, being able to project something volumetrically, but is it really _useful_? I fail to see how paying $20,000 for a bleeding-edge "display sphere" makes more sense than rendering something in stereo and crossing your eyes, which most visualization packages are capable of doing nowadays anyway.

    Where I used to work, we had a number of visualization packages that allowed researchers to view molecules/proteins/DNA sequences in stereo. It was routine, and required no specialized hardware.. You just render two views of the same object, side by side on the screen, with one view taken slightly from the left or the right of the other. You can manipulate them in realtime, in stereo. Doesn't require glasses.. you just have to cross your eyes. Hell, go visit my site, I've got a couple of stereoscopic wallpapers up, and there's nothing stopping me from producing stereoscopic 3D animation in Blender.

    :)
    Cheers,
    • by Riskable ( 19437 ) <YouKnowWho@YouKnowWhat.com> on Friday May 24, 2002 @02:09PM (#3580721) Homepage Journal
      Ahh, "cross your eyes". Therein lies the problem. You see, there are millions of people all over the world who don't see perfectly out of both eyes. I am one of these people (legally blind in one eye). To us, stereoscopic images like the ones you describe will never be more than a blurred picture or static on the screen.

      Also, the angle of view on stereoscopic images is usually very limited. Technologies such as this get around that problem by projecting the image onto a curved surface which provides for more of a "true" 3d-look.

      The real benefit of technology such as this is that we're one step closer to the 3D "JAWS" shark that Marty McFly encounters in Back to the Future Part II =)
      • Ahh, "cross your eyes". Therein lies the problem. You see, there are millions of people all over the world who don't see perfectly out of both eyes. I am one of these people (legally blind in one eye). To us, stereoscopic images like the ones you describe will never be more than a blurred picture or static on the screen.

        Dude, you can't see in 3D anyway...at least not through retinal disparity, so how would the sphere display help you out?

        There are only two cues the human brain has to perceive 3 dimensions. One is the relative size of an object, assuming you have some idea how big it should appear at a given distance--focusing your vision is a part of this aspect as well.

        The other is the slight difference in image perceived by each eye--called retinal disparity.

        If you are blind in one eye, you'll never have anything better for depth cues than the relative sizing deal.

        I don't see how this sphere would help you.
        • by sab39 ( 10510 )
          There are only two cues the human brain has to perceive 3 dimensions. One is the relative size of an object, assuming you have some idea how big it should appear at a given distance--focusing your vision is a part of this aspect as well.

          The other is the slight difference in image perceived by each eye--called retinal disparity.


          Uh, no. There's (at least) one other:

          The way the view you're seeing changes as you move your head from side to side and/or up and down.

          I'm guessing, but I'd be surprised if this wasn't the reason why most basketball players seem to deliberately bend at the knee (moving their head up and down) before shooting a free throw. Stereoscopic vision doesn't help you perceive how far away a horizontal line is.

          For people blind in one eye, I imagine that moving the head becomes much more important as a depth cue. And this system provides that, where stereo images don't.
          • I was going to mention this, but you beat me to it. The type of 3D that I see *IS* relative to movement. The easiest way to explain this to someone is to have them cover/close one eye and look at a photograph. Now keep both eyes open and look at the photograph again. It doesn't change your perception of the photograph, because it's just a flat 2D image.

            However, if you were to cover one eye and look at a 3D object, you get a totally different sense of that object--despite the fact that you're not viewing it with retinal disparity. This is because of the movement of your head/body (no matter how still you think you can sit, you're still moving enough to give your eyes a changing image).
            • If I cover one eye and look at an object it looks exactly the same, unless I cover my good right eye, then it is blurry. I can still sense 3d fine. I just reached and grabbed very small things with my finger and thumb at varying distances with no problem. However, I have NEVER seen a stereoscopic image. And believe me I've used the techniques everyone says. some people just can't see them.
            • by vrt3 ( 62368 )
              However, if you were to cover one eye and look at a 3D object, you get a totally different sense of that object

              To be honest, I don't. I get exactly the same sense whether I use one eye or both. The reason is that in most circumstances only one eye at a time is focused. My left eye is short-sighted, while my right eye is far-sighted. Very strange, and as a result I'm bad at perceiving depth. Good enough for normal life; I have no problems driving a car, but it does make parking more difficult.

              I can see stereographic images, but it's very hard to get both of my eyes focused simultaneously (up to 1 m I can see sharply with both eyes, further away only with my right eye; it depends on lighting conditions though, on a sunny day it's clearly better). Looking through binoculars or a stereomicroscope gives me full 3D view, since they allow me to adjust the focus separately for each eye.

              3D techniques that rely on both eyes getting a different image won't have much effect for me, unless the screen is in the range where both my eyes can adapt and focus enough. Or maybe I should get glasses.


        • You also get a cue from the degree your eyes are crossed. If you are looking at something far away, your eyes are fairly parallel, close up they are "crossed". Your brain monitors the angle and it helps you guess distance.

      • Speak for yourself. My vision is hella messed up in my right eye and I can still see stereoscopic images....

        Of course, if you're totally blind in one eye then you can't see stereoscopic images, but if you're totally blind in both eyes you can't see any images, so we should stop developing all display technology.

        Tim
        • no, the point is that the globe display is better than images.

          why should I have to adapt to technology by doing something fairly unnatural like crossing my eyes?

          the original skeptical poster seems to be more concerned with the expense than the benefit of the technology. Very in keeping with the times, and also very short-sighted.
      • Hear hear!

        I have astigmatism in one eye, and even WITH my glasses/contacts, my right eye is severely dominant. Given that, anything that relies on stereo vision (3D movies, "hidden picture" posters, etc.) is lost on me.
      • You know, that was the first thing I thought of when I read the article: someone who has lost vision in an eye. But then as I read it I realized that it's better than most 3D displays because you don't need special glasses. Wasn't the big point, the excitement, the breakthrough of the thing that you don't need special 3D glasses to view the image? So even with one eye you could see an object in the sphere: not depth, but still... objects. It's still a pretty cool display.
    • Reading your comment made me wonder if when they were developing CRTs for computer displays, somebody said: "Aaaaaand, why bother developing this in the first place, when we can just display computer output on a line printer, without using any specialized hardware? I fail to see how spending $20,000 for a bleeding edge 'display screen' makes more sense than outputting to paper, which most packages are capable of doing nowadays anyway."

      I'm sorry, but I can't help but view this type of argument as anti-progressive and monumentally shortsighted. I for one (and I know I'm not alone) have never found the "cross-eyed" technique comfortable or intuitive, and even when it works, the resulting depth perception is nowhere near as good as looking at a real object. A true volumetric 3D display would drastically improve the user visualization experience. You want surgeons to rely on crossing their eyes to accurately perceive a high resolution model of your brain when performing surgery? Obviously, it's not as economical as using a stereogram, but it also won't cost $20K forever either. Does it "make more sense"??? In the long run, absolutely.
    • If we all had that attitude then we probably wouldn't have TV, or even the computers we are sitting in front of right now.

      I bet the viewers of the first TV sets, with their round screens and flickering display, thought the same thing.

      Most of the great technologies that we are using nowadays started off as expensive prototypes with poor performance. Look how far we've come, though.
    • Because with the "display sphere" you'll be able to see all sides of an object, unlike your "stereo" images, where you will only be able to see one face/side of an object.
  • that's a cube of 464x464x464 pixels. It's a great start, but I'd rather have a Radeon 7000 :)
  • by Gannoc ( 210256 ) on Friday May 24, 2002 @01:57PM (#3580659)
    I read that, and smelled something fishy. That was almost certainly some marketing moron's idea to mention, since I really doubt the company spent years to develop a hugely expensive three dimensional display in order to SCREEN LUGGAGE.

    I despise any company that twists the truth to take advantage of people's fears.

    • I read that, and smelled something fishy. That was almost certainly some marketing moron's idea to mention, since I really doubt the company spent years to develop a hugely expensive three dimensional display in order to SCREEN LUGGAGE.

      I despise any company that twists the truth to take advantage of people's fears. Pretty much everyone is doing it, it is the latest sales tactic for all sorts of things - most of which are completely unrelated to security. Suddenly a CRM package is being described as a way to "profile and track terrorists", a word processor becomes a "terrorist profile authoring device" etc etc.

      One of the amazing things about the US is how quickly good taste goes out the window when it gets in the way of a sale.

      • Dude. Suddenly, on September 12, a lot of different airports decided they would really like to be able to visualize the luggage that passed through their screening points better than the 12" black-and-white monitor let them. Actuality saw an opportunity to fill the niche. Are you saying it's in poor taste to sell a product you've developed to people who want to buy it?

        As it happens, there are relatively few places where this technology is particularly useful right now. Screening luggage happens to be one of them. Cut the guys some slack.
    • it is A use. not THE use.

      nobody developed the computer so I could play solitaire.
  • "voxels". (Score:1, Insightful)

    by SkulkCU ( 137480 )

    In case you were wondering, the term for 'the need to create new words and catchphrases' is "stupid".
    • You might want to do a quick Google search [google.com] before assuming that a term you've never heard before was just made up by a marketroid.

      Andy

    • the term voxel goes back many years:
      First mention in 1987 on Usenet:
      http://groups.google.com/groups?hl=en&lr=&as_drrb=b&q=voxel&btnG=Google+Search&as_mind=12&as_minm=5&as_miny=1981&as_maxd=24&as_maxm=5&as_maxy=1987
    • Re:"voxels". (Score:3, Insightful)

      by DataPath ( 1111 )
      the term voxel is hardly new. I recall reading about 3D projections and displays that used the term "voxel" close to 12 years ago. And it's not any stupider than calling a 3D square a "cube."
    • This term was not made up by marketing and was not made up yesterday. It refers to a "volumetric pixel" and has been around for decades!
      These days it is a very common term in volumetric imaging. Too often people only talk about polygonal rendering, but visualizing clouds of points is very important in fields such as seismic and medical imaging.
    • In addition to the comments above, you might also consider doing a quick study on how languages develop over time, especially English.

  • by kwashiorkor ( 105138 ) on Friday May 24, 2002 @02:12PM (#3580738)
    The Fortune Tellers Association of America called. They want their idea back. They're claiming "patent infringement" or some such.
  • This could revolutionize the pr0n industry!! Just think of the implications... (and where you could zoom).
  • Time to start investing in 3D pr0n companies!

  • Holodeck here we come!
    • umm.. I think this tech needs a little development before it's ready for use in holodecks. Something about a screen spinning at high RPM... "Activate Holodeck 4, Mr. Data.. and hit purée"
  • It can be used to visualize protein structures and to plan surgical and radiation treatment by locating the exact position of a tumour on an x- ray or mammogram.

    Finally, something to get geeks interested in breasts.

    RMN
    ~~~
  • 100 million sounds big, but you have to take the cube root of this to get the resolution along one axis: a whopping 465. In 2D, this is like a 465x465 display; not terribly exciting. This is the "curse of dimensionality" that volume graphics needs to deal with.
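The cube-root arithmetic here is easy to check; a quick sketch (the 465 in the comment comes from rounding up rather than to nearest):

```python
# Per-axis resolution of an isotropic volumetric display,
# given the 100-million-voxel figure from the article.
voxels = 100_000_000

side = round(voxels ** (1 / 3))   # cube root: voxels per axis
print(side)                        # 464 (465 if you round up)
print(side ** 3)                   # 99897344 -- close to the quoted total
```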
  • But... (Score:1, Redundant)

    by mshiltonj ( 220311 )
    Will it let you see the Death Star from the other side of Yavin before it gets the Massassi base within range?
  • "The Perspecta is a hardware and software combination that projects 3D images inside a 500 mm transparent spherical dome."

    I was tired of being told that renderings were in 3D... yet projected onto a flat screen. Yes, you can rotate it around in your program, but when you put it on a flat screen, it's really not 3D; it just conveys the idea of being 3D.
    • No no, it's true 3D. I saw an early prototype of it on the Discovery Channel. It's hard to describe, but it's sorta like how Disneyland makes their Haunted House ghosts appear 3D. They project images off a mirror and through a sheet of glass, and so it gives a true 3D effect where anybody can view an object from any angle, all at the same time.
    • Um, did you look at their web page? It's at www.actuality-systems.com. It actually is in 3D -- the "flat" screen rotates and you see it as a real 3D image that actually occupies a volume.
      • Um, did you read his post? You know, the one titled "Finally!" that says "I was tired of being told renderings were 3d"...

        He knows this is real 3d, and he's happy about it.
  • Not enough for air traffic control, IMHO.
  • by Uller-RM ( 65231 ) on Friday May 24, 2002 @02:31PM (#3580842) Homepage
    I did a contract coding job similar to this about two years ago - for an exhibit at a tech expo, we rigged up a pair of curved mirrors and a plexiglass hemisphere with a hinged hatch. A projector shot a 1024x768 image through the pair of mirrors, producing an image that gave you roughly 270deg FOV horizontal and 90deg vertical. Add a joystick and a rudimentary tunnel shooter... :)

    My part of it was hacking up the game engine (Virtools' kit) to render from two in-game viewpoints each frame and distort the image in a third rendering pass so you'd get a correct image on the screen - lots of optimization, since we were rendering at that resolution two years ago, when the GeForce2 GTS and 1GHz P3s were the height of consumer technology. That, and some other blocks for the scripting language for level transfers and whatnot.

    (The engine used to be marketed under the name Nemo, now called just Virtools Dev. Not too impressive graphically by today's standards, but it has the most artist-friendly scripting system I've EVER seen. If they strapped decent rendering tech onto it and some network code, they'd have an absolutely outstanding product on their hands.)

    • Render from two in-game viewpoints each frame and distorting the image in a third rendering pass so you'd get a correct image on the screen

      As a coder, what did the third pass do?
      • After compositing the two camera images in the framebuffer, it did a read from the back buffer into main memory (specifically, into a block in the heap with buffer space above and below to get a power of 2), uploaded it back to the card as an OpenGL texture, and rendered it bilinearly across a precalculated set of vertices and UV coords (stored in vertex arrays and display lists). If we were running it on a Voodoo, I could have used Glide to do it all on the card, but this was right when they were ditching Glide, and the rendering engine didn't have a Glide plugin :-\

        I would have liked to work out an entirely 2D routine for doing it instead of having to do the texturing, but I was under a rather nasty timeframe, and the example code we got from the inventor of the technique took some time to decipher. Why people insist on doing horrible pointer arithmetic instead of [] (which the compiler turns into *(x+i) anyway), I'll never know.

        On a Geforce2GTS the fill rate and AGP2x limited us to about 20fps, but on an SGI NT workstation it absolutely FLEW since main memory and video memory were shared. Unfortunately, we had to do 10 kiosks with this, and the budget didn't allow getting an SGI for each one :-\
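For what it's worth, the "precalculated vertices and UV coords" pass described above boils down to a lookup table built once and reused every frame. A minimal CPU-side sketch in Python/NumPy; the radial distortion used here is a made-up stand-in for the exhibit's actual mirror-correction math, and `make_warp_lut`/`warp` are hypothetical names:

```python
import numpy as np

def make_warp_lut(w, h):
    """Precompute, once, which source pixel each output pixel samples.
    The radial 'pincushion' mapping here is an illustrative stand-in."""
    grid = np.mgrid[0:h, 0:w].astype(float)
    ys, xs = grid[0], grid[1]
    nx, ny = xs / w * 2 - 1, ys / h * 2 - 1      # normalize to [-1, 1]
    r2 = nx**2 + ny**2
    scale = 1 + 0.3 * r2                          # arbitrary coefficient
    sx = np.clip((nx * scale + 1) / 2 * (w - 1), 0, w - 1).astype(int)
    sy = np.clip((ny * scale + 1) / 2 * (h - 1), 0, h - 1).astype(int)
    return sx, sy

def warp(frame, lut):
    """Apply the precomputed mapping: one gather per output pixel."""
    sx, sy = lut
    return frame[sy, sx]

# usage: build the LUT once, then apply it to every rendered frame
lut = make_warp_lut(64, 64)
frame = np.arange(64 * 64, dtype=float).reshape(64, 64)
out = warp(frame, lut)
assert out.shape == frame.shape
```

On the original hardware the same idea was expressed as a static triangle mesh with warped UVs, so the texture unit did the gather instead of the CPU.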
  • I don't think this would work too well for ATC. You need to be able to see large amounts of information, and viewing it in 3D would be more of a hindrance than a help. Besides, controllers visualize it in their mind's eye anyway. Not to mention they are a weird breed; they hate change... that's why they haven't switched to QWERTY keyboards and still use some funky proprietary setup (in the US).
  • Dragon's Lair! (Score:1, Offtopic)

    by teamhasnoi ( 554944 )
    At last I can finish my Dragon's Lair cocktail cabinet! I will only charge 1,000 tokens per game, just like the original!
  • by SeanAhern ( 25764 ) on Friday May 24, 2002 @02:56PM (#3580948) Journal
    The trend seems to be toward simulating 3D with high-resolution flat screens, though.

    These are completely different technologies. The first is an "actual" 3D display. The voxels have a true location in 3D space, for instance. People can view it from any angle with no equipment.

    The second appears to be just a large screen. People wear shutter or polarized glasses to send different images to the left and right eyes.

    While the second technology is great, especially for high-resolution display to a single person, it really is annoying when used with multiple people at different locations in space.

    Since there is only one set of (left and right) images on the flat screen, only one viewpoint can be chosen. If a group of people is sufficiently far from the screen, or sufficiently close together in the room, it's fine. But if you let the people wander around the room, you start getting perspective problems that really make collaborative viewing troublesome.

    I have a feeling that we will be seeing voxel-based visualization like the one mentioned in this post more and more often. It's just more natural to use.

    As someone who is in the field of high-resolution scientific visualization [llnl.gov] (that's me on the left), I certainly hope that technology will move in this direction.
    • Since there is only one set (left and right) images on the flat screen, only one viewpoint can be chosen.

      I know that at least at Philips Research Eindhoven they are working on flatscreen displays which do not have the disadvantages you mention. They can be viewed by multiple people simultaneously without a problem, and without goggles.

      The technique is actually quite simple: take a high-res flatscreen and stick a prism sheet over it, so that depending on the angle you look at the screen from, you will see a different pixel.

      At the lab they did have a prototype (1600x1200 as I recall) with 9 distinct views on it (thus reducing the apparent resolution to 533x400). Really cool to see 3D on a flat(!) screen, without goggles.

      Those 9 views were actually all horizontally distinct, otherwise the effect wouldn't be very visible. So it doesn't work when you keep your head rotated, or move your head up and down.

      (no link, sorry, they are not public about it. I saw it almost a year ago now, guess they must have improved since then)
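The prism-sheet trick described here amounts to spatial view multiplexing: each panel pixel is assigned to one of N views, and the sheet steers it toward a different angle. A toy sketch; the divide-by-3-per-axis (slanted) layout is inferred from the quoted 1600x1200 -> 533x400 figures, and `view_of_pixel` is a hypothetical illustration, not Philips' actual pattern:

```python
VIEWS = 9          # distinct views, per the comment
W, H = 1600, 1200  # panel resolution, per the comment

# With a slanted sheet the resolution loss is split across both axes:
# sqrt(9) = 3 per axis, matching the quoted 533x400 apparent resolution.
per_axis = int(VIEWS ** 0.5)
print(W // per_axis, H // per_axis)   # -> 533 400

def view_of_pixel(x, y, views=VIEWS, slant=3):
    """Toy slanted interleave: which view a given panel pixel feeds."""
    return (x + slant * y) % views
```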

  • I really am amazed.

    Years ago I just didn't believe we'd really see volumetric 3D because the jump from (say) 640x480 to 640x480x480 just seemed too wide.

    If we can get a stationary 400 x 400 x 400 image for $20,000, it doesn't seem all that much of a stretch to 400 x 400 x 400 at 30 frames per second... or from there to 1600 x 1600 x 1600 at 30 frames per second... or from there to 1600 x 1600 x 1600 at 30 frames per second for $200.

    And disk speeds and Internet speeds are coming along just fine...

    • Umm... The leap you speak of is a considerable jump.
      400x400x400 stationary is 64 million voxels, no refresh required. Seems simple enough.
      400x400x400 at 30fps is 1.92 billion voxels per second. That's already right at the limit of current COTS video (with its mere 1-2 gigapixel refresh).
      With Moore's corollary squared holding true for video cards (refresh rate doubles approximately every 9 months), it will take us about 9 months to reach that comfortably. Still not too far off: March 2003.

      Now, for a stationary 1600x1600x1600 image we need 4.096 billion voxels, and for a 30fps image at that resolution we need 122.88 billion voxels per second.
      This is a jump of 2^6 from our present 2 gigapixel limit. This means, assuming the corollary continues to hold true, that we won't have this technology for a (minimum) period of four and a half years.
      That's still pretty damn fast considering how long it took to get a computer into everyone's home, or the time it took to push us from 720x348 to 1920x1600, but it's still quite a ways off...
      After that they still have to drop it down to $200; I for one will not hold my breath.
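The arithmetic behind these projections can be recomputed in a few lines; a sketch, taking the poster's 2-gigapixel baseline and 9-month doubling period as given:

```python
import math

REFRESH_NOW = 2e9        # ~2 gigapixels/s, the poster's COTS figure
DOUBLING_MONTHS = 9      # the poster's assumed doubling period

def voxels_per_second(side, fps=30):
    """Refresh bandwidth for a side^3 volumetric display at fps."""
    return side ** 3 * fps

def months_until(target, current=REFRESH_NOW, doubling=DOUBLING_MONTHS):
    """Months of doublings needed to grow from current to target."""
    doublings = max(0.0, math.log2(target / current))
    return doublings * doubling

print(voxels_per_second(400))    # 1.92e9 voxels/s: 400^3 at 30 fps
print(voxels_per_second(1600))   # 1.2288e11 voxels/s: 1600^3 at 30 fps
print(months_until(voxels_per_second(1600)) / 12)  # ~4.5 years
```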
      • I'm not going to try to hold my breath for 4-1/2 years (plus however long it takes to get the cost down by a factor of 100).

        Actually I expect it to take longer than you do.

        But I have a good chance of seeing it, not merely within my lifetime, but even before my retirement.

        Considering that I once doubted that I would ever even own my own computer, that's not too bad!
  • I am curious... does any kind of projected visualization exist which does not require a sphere or some such projection medium?

    By this I mean... think R2-D2's projection unit, the panels used in Final Fantasy: The Spirits Within, or, based on the trailer, the computer screen used by Tom Cruise in Minority Report.

    I was thinking projectors exist, but they have to project onto something. Maybe you could have something such as some kind of mist which is projected onto. Is it possible, when projecting from two locations, to have a 3D pixel (I think the term is voxel, but I'm not sure)?

    Just curious... any additional links to these types of technology (and any terms referring to them, so I know what to look for) are greatly appreciated.
  • As some people have tried to discuss already - there is a large percentage of the population that can't actually see stereo (I could be wrong, but I think it is somewhere up around 15%) for one reason or another. For some people, the brain just hasn't learnt how to spatialise the stereo images produced by the eyes.. for others, even minor damage to one eye can cause problems with stereo.

    But anyway, that is beside the point - I think technology like this is limited (with the exception of entertainment) unless you can get your hands into the 3D image and use it as if it were truly virtual reality. It is one thing to look at a 3D image of a brain tumour, but if you still have to use a 3D mouse to manipulate it, then there is still an intuitive leap to be made when using applications written for the display - just like the intuitive leap between the 2D mouse and the 2D display. What this needs is to be combined with technology like Reachin [reachin.se] [troll warning], where you can actually see the data in the same space where your hands are working on it. Much more appropriate :)

  • I'd like to point out that the space battles in the upcoming strategy game Master of Orion 3 [quicksilver.com] will be rendered using voxels, according to posts made by the programmers in the discussion boards (which are too poorly organized to search through quickly, but the posts ARE there.) Hopefully, this will not cripple the game.
  • I missed the Actuality story. This [livejournal.com] was my take on that (also included below). It doesn't look like the new version is much of a jump forward...

    Now, niftyneat as it looks, I see a few problems...

    First and foremost, you're going to be stuck representing solid 1-color materials, wireframes and ghosts with this. You're also not going to be able to make objects appear to be lit correctly. Why? Because the display has no idea what angle you're viewing it from. I'll explain.

    Hold your thumb in front of the screen. It's blocking some part of the display, right? Move your head back and forth a little, and it will block different parts. Raise your head up a little or drop it down and it will block different parts again. The thing is, the display has no idea where you're looking from, so every part needs to be visible at all times. It can't clip out bits that are behind other things like a traditional 2D display. The result is that if you show a screen full of text, and draw a thumb in front, you still see the text through the thumb. Both will appear to act like ghosts.

    Now, consider drawing a Coke can with a flashlight shining on the side. Again, it has no idea which side you're viewing from, so it's got to draw all sides of the can. The thing is, as you move about it, the logo on the front of the can shouldn't be visible when looking at the back of the can. Similarly, when you look at the side opposite the flashlight, it should be all dark. But since the display uses volumetric texels, it has no idea about the facing of each texel. Every texel's going to be drawn, so you'd see the backward logo when looking from the back, and you'd see what boils down to a really confusing lighting situation when viewing from the non-flashlight side. It's like ghosts or colored X-rays.

    If you're still with me, that covers why you get no shadows, and why every surface ends up looking uniformly dull and not-too-shiny.

    Next problem is - it's gonna be SLOW! Sad, but true! If it were a 3D bitmap representing equal units of a cube, that would be one thing. Unfortunately, it represents slices of a bitmap rotating through space.

    Now, let me say this: Computers hate round things. Arcs, swooshes, ribbons, none of these are much fun for a computer to draw (comparatively speaking), much less, to render into.

    Normally, polygon raster operations boil down to setting up a bunch of lines, one per scanline, and for each, figuring out how you progress across the line in measured, discrete steps. "I'm starting here in the texture, and I'll be there in the texture. I need to get there in 32 screen pixels, and I advance n units through regular steps of screen, texel and 1/z space." This lets the computer do the expensive calculation once, and just do 32 iterative steps to render the 32 pixels on that scanline. Any modern 3D engine is actually optimized to do the expensive stuff 1-2 times, creating the per-scanline numbers iteratively as well.
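A rough sketch of that incremental scanline idea: pay for the divisions once per span, then take cheap fixed steps per pixel. The names and structure here are illustrative, not from any real engine, and perspective-correct 1/z stepping is omitted for brevity.

```python
def draw_span(framebuffer, y, x0, x1, u0, v0, u1, v1, texture):
    """Fill one scanline span, stepping texture coordinates incrementally."""
    n = x1 - x0
    if n <= 0:
        return
    # Expensive setup: one division per span...
    du = (u1 - u0) / n
    dv = (v1 - v0) / n
    th = len(texture)
    tw = len(texture[0])
    u, v = u0, v0
    for x in range(x0, x1):
        # ...then only cheap adds per pixel drawn.
        framebuffer[y][x] = texture[int(v) % th][int(u) % tw]
        u += du
        v += dv
```

This is exactly the case that breaks down once you have to clip each span against a rotating slice instead of a fixed rectangular screen.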

    The only places where this approach doesn't work are where you're clipping against the edge of the display area. Clipped triangles are traditionally an order of magnitude more expensive to render than non-clipped ones. So much so, that terrible tricks are used to avoid them or reduce them to categories of special cases that can be tackled to attempt to avoid reverting to a true clip. For example, many display systems actually create waste RAM in a border around the screen. If a triangle doesn't penetrate the waste area, the rasterizer will go ahead and draw (or pretend to draw) the dummy pixels. It's only in the case where triangles are partly on screen, but go even beyond the dummy area that the hideously expensive render functions are called. Drawing millions of pixels per second that you know the user will never see? That sure points to a problem!

    Enter the circular slice-based display space.

    Here, for every single pixel, you've got to find which bits of a render go through. Essentially, you have to clip against the front and back of every single triangle you render as you calculate each slice. You're taking the worst hit on every single triangle!

    What's even worse is that a single 'frame' (half rotation, assuming the rotating display plane is visible from front and back) consists of just shy of 200 renders. This means you're taking that 1% worst case scenario and repeating it 100% of the time, and repeating it about 200 times per frame. And because you're dealing with an arc for the rotational advancement (remember, computers like even, linear, discrete steps), you're dealing with curved surfaces instead of little cubes and the planes of a view frustum. Essentially, you're looking for the union of an arbitrary material and a stuffed piece of macaroni instead of merely finding the portion that fits within a little box. This makes the checks for pixel penetration several orders of magnitude more expensive and makes it even more expensive to attempt to reuse data from one slice to the next.

    Hee. Plus the display is connected to your PC via SCSI2W, which is also a not-too-minor detail. You've got over 100 million pixels to send across per 'frame'. Even if they're just 1-byte pixels (256 color), and partial updates, that's asking a lot of a dual-channel 20MHz(?) bus.
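A quick back-of-envelope check of that bandwidth worry. All the numbers below are assumptions, not specs: roughly 100 million voxels per volume update, 1 byte per voxel (256 colors), and a ~20 MB/s SCSI-2 Wide link.

```python
# Assumed figures, not from the article or the spec sheet.
voxels_per_frame = 100_000_000   # ~100 million voxels per full volume update
bytes_per_voxel = 1              # 256-colour, as suggested above
bus_bytes_per_sec = 20_000_000   # ~20 MB/s SCSI-2 Wide, optimistically

seconds_per_full_update = voxels_per_frame * bytes_per_voxel / bus_bytes_per_sec
print(seconds_per_full_update)   # -> 5.0, i.e. one full volume refresh every 5 s
```

Even under those generous assumptions, a full update takes seconds, which is why partial updates are the only plausible way to animate anything over that link.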

    Mind, we're still discovering things today which would have sped up rendering on our Commodore 64s. The computational cost will come down over time as more ways are developed for rendering in non-uniform/curved space, and as different spatial representation methods are explored. This is a nifty advance, certainly a step closer than the silly lenticular-lens-based 3D systems and the layered LCD-over-CRT approaches.

    Still. Think of a ghosty AutoCAD on a 286. Look, but don't touch. We've still got a ways to go before 3D games and movies become a reality.

    But don't get me wrong: It's a neat advancement, and it gives me hope. If I could borrow one, I think I'd make a noisy whirring ghost town snow globe. The shape just begs for it. And I'd love to get cracking on trying to find efficient algorithms for the unusual render space. *sigh.* $60k though. Maybe eBay can help me out on this one in another 20 years.

    • Yeah. No 3D quake with this.

      I agree with the inability to draw non-ghosty objects, since it can't render an opaque voxel, and it can't not render voxels.

      But the performance part wouldn't be that bad. At least, not quite as bad as you suggest. I think. I'm guessing a lot.

      So, let's say you've rendered the scene as a 3d bitmap. If you can solve the storage and bandwidth issues, this is actually nice in a couple ways. Like you don't have to step through 1/z space, since your x and y aren't perspective-adjusted. You're rendering directly into view coordinates. The equations for the gradients are simpler. Okay, that isn't going to help too much, especially considering that you have to render every voxel.

      But one thing that should help is that you don't have to clip to a sphere, or cylinder, or anything abnormal like that. Use a cube. Render a nice 3-d cubic bitmap. Then, for each circle slice, you just have to determine which voxels in the bitmap are covered in that slice. That's nice and linear. Your 2-D bounding box would be different on each slice as it tracked the vertical faces of the cube, but those values could be pre-calculated as they are static.

      So while it would certainly be bad, because you can't do any of the nice tricks to reduce the -amount- of rendering, the rendering itself would basically consist of the same-ol-stuff, just triangles clipped against 2-d surfaces. Bad, but a problem solvable merely through bandwidth.
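The cube-then-resample idea above can be sketched out: render into an ordinary cubic voxel bitmap first, then for each angular position of the screen, resample the cube into a flat slice rotated about the vertical axis. This is a hypothetical nearest-neighbour sketch, not how the actual hardware does it.

```python
import math

def sample_slice(volume, angle, width, height):
    """Resample a cubic voxel bitmap (volume[z][y][x], side n) into one flat
    slice rotated by `angle` about the vertical axis through the cube's
    centre, using nearest-neighbour lookup."""
    n = len(volume)
    cx = cz = (n - 1) / 2.0
    cos_a, sin_a = math.cos(angle), math.sin(angle)
    slice_px = [[0] * width for _ in range(height)]
    for sy in range(height):
        for sx in range(width):
            # u runs across the slice, centred on the rotation axis.
            u = sx - (width - 1) / 2.0
            x = cx + u * cos_a
            z = cz + u * sin_a
            xi, zi = round(x), round(z)
            yi = round(sy * (n - 1) / max(height - 1, 1))
            if 0 <= xi < n and 0 <= zi < n and 0 <= yi < n:
                slice_px[sy][sx] = volume[zi][yi][xi]
    return slice_px
```

The per-slice work here is plain linear iteration, which is the parent's point: the ugliness moves from clipping geometry against macaroni shapes to a bandwidth-bound resampling pass.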

  • since this is spinning, it's not going to be 464^3, but more like refresh_hz*x*y of screen. but, in either case, 100e6 is not so many voxels these days. a more 'reasonable' case might be in the neighborhood of an order of magnitude more - but still hardly enough to do very good aircraft separation, etc. i mean, sure, it's nice and everything, it's 3d, no glasses, etc. but it's not super high res, though 100 million sure sounds like a lot. plus i'll bet that whirring noise thing gets to you like a dentist drill after a while.
  • given the way this thing works, shouldn't the title be:

    Science: 3D Visualization Moves In Circles

    ;P
  • The difference is one is simulated 3d, the other is really 3d.

    You can't change perspective on the flatscreen.. not like a hologram. What you see is what you see, until the software changes it.

    You can't peek around something or shift your point of view by just moving.

    This globe thing, you can look at, walk around, see things as if they were a solid image. Just like a hologram (but with a 360 degree fov of course)
  • I saw something smaller but essentially much the same when I was in RUSSIA for heavens sake ;) it just showed a spinning globe with some text orbiting around it but it still looked very neat.
  • so when do i get to visit the holodeck?!
  • A somewhat older rig for 3D visualization can be bought at Uncle Ira's Moser Electronics [meco.org] site, under the "Exotic Hardware" section.
  • I've read the posts to this thread and wanted to try and clear up a couple of things.

    1. This is NOT a two-dimensional display. It creates images that are as "real" as any other object, although made of light (think "Death Star attack briefing" here).

    2. This means, that you can perceive the dimensionality (is that a word?) of the displayed objects even if you are blind in one eye, are color-blind, have only one leg, don't like cats, etc.

    3. No glasses or other viewing "helpers" are needed. Watch the screen, Luke...

    4. This object works much like a traditional big-screen projection television...only the screen is rotating around the centerpoint of the globe at a considerable speed. The speed is fast enough that your eyes can't really see the screen, but can see the projected images.

    5. This is NOT a new concept. The new thing about it is that it is the LARGEST one commercially available to date (large as far as resolution). I've personally seen versions of this display that are about 3 feet (~1 meter) around that use a matrix of LEDs on a rotating board. Same effect, really....older technology....not as good as this one. Here's one of the google returns for this design [vdivde-it.de]. Another design that has had a good amount of research is to create images using lasers shot onto a spinning helix or screw shape [google.com].

    6. So far, resolution is the biggest hurdle. Next is refresh rate, which is quite obviously tied to the rotation speed and to the speed and falloff of the light-emitting element (LED, laser, phosphor, etc).

    7. It really is amazing to see it in person. Don't knock it. It will be in more wide-spread use in the near future and will be a huge leap forward for certain tasks. Most people will never see one (except maybe in a science museum) and will never need to use one, though.

    8. Using a wireless network, now you too can forward a holographic message from Obi-wan to the Senate! heh.

  • Everyone keeps saying that its resolution is about 464*464*464. It's not. Check the specs [actuality-systems.com]. It's 768*768 by 198 sections. The section facing you will have a resolution of 768*768, which isn't that bad. The four bit colour is a bit crap though. I look forward to watching where this tech goes. Looks cool.
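For what it's worth, the spec-sheet figures cited above (768 x 768 pixels per slice, 198 slice positions) do land right around the "100 million voxels" headline number:

```python
# 768 x 768 pixels per projected slice, 198 slice positions per rotation.
voxels = 768 * 768 * 198
print(voxels)  # -> 116785152, i.e. roughly 117 million voxels
```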
