3D Visualization Moves Forward 176
Chris writes "Showing for the first time at the Society for Information Display (SID) conference in Boston was a three-dimensional display with 100 million volume pixels or "voxels". The Perspecta is a hardware and software combination that projects 3D images inside a 500 mm transparent spherical dome. Images 250 mm in diameter can be seen from a full 360 degrees without goggles, allowing the viewer to walk around the image. It can be used to visualize protein structures and to plan surgical and radiation treatment by locating the exact position of a tumour on an x-ray or mammogram. It could also be used in air traffic control, prototype designing and security scanning of luggage. Perspecta uses Texas Instruments' digital light processor technology and a spinning projection screen, which sweeps the sphere." We've done some previous stories about this globe from Actuality Systems. The trend seems to be toward simulating 3D with high-resolution flat screens, though.
Do Your Own 3D Modelling (Score:1, Interesting)
Data Explorer. Formerly from IBM. Runs on Linux and Windows.
3D protein modelling for the rest of us with only flat-screen monitors.
The hell with mammograms... (Score:1)
Re:The hell with mammograms... (Score:3, Funny)
Sweeet (Score:3, Insightful)
Re:Sweeet (Score:1)
I wonder who gets credit for this idea, especially since artists steal from each other.
slaps you on the head with a smelly trout
Re:Sweeet (Score:1)
Another use... (Score:5, Funny)
Re:Another use... (Score:1)
Re:Another use... (Score:2)
Re:Another use... (Score:1)
Re:Another use... (Score:1)
Re:Another use... (Score:1)
The Killer App (Score:1)
Glasses suck (Score:1)
Displays that require the user to wear glasses aren't what I'd call an adequate solution. There is a big difference between having to stand on one side of a flat 3D rendering and being able to walk around a real 3D image.
Re:Glasses suck (Score:2, Interesting)
I believe what he was referring to were the screens that actually create a 3D image without glasses. They're really impressive. They do this by stacking two high-res LCDs, one on top of the other, with a slight offset in the image between the two. The top one is translucent, and the difference between the two pictures creates the illusion of 3D without having to use glasses or anything. It's like you've got your own holographic monitor!
I just can't wait until they come into my college budget price range (yeah, like that's gonna happen).
Re:Glasses suck (Score:2)
Autostereo displays [cam.ac.uk]
I saw this 5 years ago, and it was extremely cool. They played a variant of Pong where the ball went in and out from the plane of the screen to a plane that appeared to be more or less level with the back of the monitor. They also used a cool 3D mouse so you could move your bat. It was totally realistic stereoscopic 3d without any goggles or anything.
What you're describing sounds like you could render both the bats easily but you couldn't render the ball because it would be in-between the planes of the two LCDs. The autostereo system handles all intermediate distances just fine.
Re:Glasses suck (Score:1)
pr0n (Score:1)
Porn is the killer app. First the next-generation Internet and now this.
Volume Graphics Software (Score:2)
Anyone got that cached? (Score:1)
Re:Anyone got that cached? (Score:1)
A copy of the article has not appeared here yet because it's Friday afternoon and all the karma whores are not on top of their game ;-)
Aaaaaand, why bother. (Score:3, Interesting)
Ok, it's cool and all.. yeah, being able to project something volumetrically, but is it really _useful_? I fail to see how paying $20,000 for a bleeding-edge "display sphere" makes more sense than rendering something in stereo and crossing your eyes, which most visualization packages are capable of doing nowadays anyway.
Where I used to work, we had a number of visualization packages that allowed researchers to view molecules/proteins/DNA sequences in stereo. It was routine, and required no specialized hardware. You just render two views of the same object, side by side on the screen, with one view taken slightly to the left or right of the other. You can manipulate them in realtime, in stereo. Doesn't require glasses.. you just have to cross your eyes. Hell, go visit my site, I've got a couple stereoscopic wallpapers up, and there's nothing stopping me from producing stereoscopic 3D animation in Blender.
Cheers,
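For the curious, the side-by-side stereo pair the parent describes is easy to sketch: project the same 3D points from two horizontally offset eye positions, then swap the panels for cross-eyed viewing. A minimal Python sketch; the eye separation, focal length, and sample point are made-up illustration values, not anything from a real visualization package:

```python
# Sketch of a cross-eyed stereo pair: project the same 3D points
# from two horizontally offset eye positions. All numbers here are
# made up for illustration.

def project(point, eye_x, focal=2.0):
    """Pinhole-project (x, y, z) onto the screen plane, as seen
    from an eye offset horizontally by eye_x."""
    x, y, z = point
    scale = focal / (focal + z)              # farther points shrink
    return ((x - eye_x) * scale + eye_x, y * scale)

def stereo_pair(points, separation=0.06):
    half = separation / 2.0
    left_eye_view = [project(p, -half) for p in points]
    right_eye_view = [project(p, +half) for p in points]
    # For cross-eyed ("free") viewing, the views are swapped: the
    # left eye's view goes on the right half of the screen.
    return {"left_panel": right_eye_view, "right_panel": left_eye_view}

pair = stereo_pair([(0.1, 0.1, 0.5)])
print(pair["left_panel"], pair["right_panel"])
```

The two panels differ only in horizontal position; that disparity is the entire depth cue.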
Re:Aaaaaand, why bother. (Score:4, Insightful)
Also, the angle of view on stereoscopic images is usually very limited. Technologies such as this get around that problem by projecting the image onto a curved surface which provides for more of a "true" 3d-look.
The real benefit of technology such as this is that we're one step closer to the 3D "JAWS" shark that Marty McFly encounters in Back To The Future 2 =)
Re:Aaaaaand, why bother. (Score:2)
Dude, you can't see in 3D anyway...at least not through retinal disparity, so how would the sphere display help you out?
There are only two cues the human brain has to perceive 3 dimensions. One is the relative size of an object, assuming you have some idea how big it should appear at a given distance--focusing your vision is a part of this aspect as well.
The other is the slight difference in image perceived by each eye--called retinal disparity.
If you are blind in one eye, you'll never have anything better for depth cues than the relative sizing deal.
I don't see how this sphere would help you.
Re:Aaaaaand, why bother. (Score:3, Insightful)
The other is the slight difference in image perceived by each eye--called retinal disparity.
Uh, no. There's (at least) one other:
The way the view you're seeing changes as you move your head from side to side and/or up and down.
I'm guessing, but I'd be surprised if this wasn't the reason why most basketball players seem to deliberately bend at the knee (moving their head up and down) before shooting a free throw. Stereoscopic vision doesn't help you perceive how far away a horizontal line is.
For people blind in one eye, I imagine that moving the head becomes much more important as a depth cue. And this system provides that, where stereo images don't.
Re:Aaaaaand, why bother. (Score:2)
However, if you cover one eye and look at a 3D object, you still get a real sense of its depth--despite the fact that you're not viewing it with retinal disparity. This is because of the movement of your head/body (no matter how still you think you can sit, you're still moving enough to give your eyes a changing image).
Re:Aaaaaand, why bother. (Score:2)
Re:Aaaaaand, why bother. (Score:3, Informative)
To be honest, I don't. I get exactly the same sense whether I use one eye or both. The reason is that in most circumstances only one of my eyes is in focus at a time. My left eye is short-sighted, while my right eye is far-sighted. Very strange, and as a result I'm bad at perceiving depth. It's good enough for normal life; I have no problems driving a car, but it makes parking more difficult.
I can see stereographic images, but it's very hard to get both of my eyes focused simultaneously (up to 1 m I can see sharply with both eyes; further away, only with my right eye; it depends on lighting conditions though, on a sunny day it's clearly better). Looking through a binocular or stereomicroscope gives me a full 3D view, since they allow me to adjust the focus separately for each eye.
3D techniques that rely on both eyes getting a different image won't have much effect for me, unless the screen is in the range where both my eyes can adapt and focus enough. Or maybe I should get glasses.
Re:Aaaaaand, why bother. (Score:1)
You also get a cue from the degree your eyes are crossed. If you are looking at something far away, your eyes are fairly parallel, close up they are "crossed". Your brain monitors the angle and it helps you guess distance.
Re:Aaaaaand, why bother. (Score:2)
Of course, if you're totally blind in one eye then you can't see stereoscopic images, but if you're totally blind in both eyes you can't see any images, so we should stop developing all display technology.
Tim
Re:Aaaaaand, why bother. (Score:1)
why should I have to adapt to technology by doing something fairly unnatural like crossing my eyes?
the original skeptical poster seems to be more concerned with the expense than the benefit of the technology. Very in keeping with the times, and also very short-sighted.
Re:Aaaaaand, why bother. (Score:2)
Tim
Re:Aaaaaand, why bother. (Score:1)
I have astigmatism in one eye, and even WITH my glasses/contacts, my right eye is severely dominant. Given that, anything that relies on stereo vision (3D movies, "hidden picture" posters, etc.) is lost on me.
Re:Aaaaaand, why bother. (Score:2, Interesting)
Re:Aaaaaand, why bother. (Score:3, Insightful)
I'm sorry, but I can't help viewing this type of argument as anti-progressive and monumentally shortsighted. I for one (and I know I'm not alone) have never found the "cross-eyed" technique comfortable or intuitive, and even when it works, the resulting depth perception is nowhere near as good as looking at a real object. A true volumetric 3D display would drastically improve the user visualization experience. You want surgeons to rely on crossing their eyes to accurately perceive a high-resolution model of your brain when performing surgery? Obviously, it's not as economical as using a stereogram, but it won't cost $20K forever either. Does it "make more sense"??? In the long run, absolutely.
Re:Aaaaaand, why bother. (Score:1)
I bet the viewers of the first TV sets, with their round screens and flickering display, thought the same thing.
Most of the great technologies that we are using nowadays started off as expensive prototypes with poor performance. Look how far we've come, though.
Re:Aaaaaand, why bother. (Score:1)
100,000,000 voxels: not as impressive as it sounds (Score:2, Interesting)
Re:100,000,000 voxels: not as impressive as it sou (Score:1)
It's a sphere with a radius of 768 pixels and 198 slices. That means it is about 768*768*198.
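For what it's worth, that geometry works out to a bit more than the marketing figure; a quick back-of-the-envelope check using the slice size and slice count quoted above:

```python
# Back-of-the-envelope check of the voxel count: each slice of the
# rotating screen is a 768x768 bitmap, and there are 198 angular
# slice positions. (Voxels near the rotation axis are swept by many
# slices, so the number of *distinct* spatial positions is lower.)
slice_width, slice_height, slices = 768, 768, 198

voxels = slice_width * slice_height * slices
print(f"{voxels:,}")  # 116,785,152, a bit over the "100 million" headline
```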
Used in security scanning of luggage? (Score:4, Insightful)
I despise any company that twists the truth to take advantage of people's fears.
Re:Used in security scanning of luggage? (Score:2)
I despise any company that twists the truth to take advantage of people's fears. Pretty much everyone is doing it, it is the latest sales tactic for all sorts of things - most of which are completely unrelated to security. Suddenly a CRM package is being described as a way to "profile and track terrorists", a word processor becomes a "terrorist profile authoring device" etc etc.
One of the amazing things about the US is how quickly good taste goes out the window when it gets in the way of a sale.
Re:Used in security scanning of luggage? (Score:1)
As it happens, there are relatively few places where this technology is particularly useful right now. Screening luggage happens to be one of them. Cut the guys some slack.
Re:Used in security scanning of luggage? (Score:1)
Re:Used in security scanning of luggage? (Score:2)
nobody developed the computer so I could play solitaire.
"voxels". (Score:1, Insightful)
In case you were wondering, the term for 'the need to create new words and catchphrases' is "stupid".
Re:"voxels". (Score:1)
Andy
Re:"voxels". (Score:1)
First mention in 1987 on Usenet
http://groups.google.com/groups?hl=en&lr=
Re:"voxels". (Score:3, Insightful)
Re:"voxels". (Score:1)
These days it is a very common term used in volumetric imaging. Too often people only talk about polygonal rendering, but visualizing clouds of points is very important in such fields as seismic and medical imaging.
Re:"voxels". (Score:1)
Re:"voxels". (Score:2)
It's amazing that the cap has really removed any incentive to post quality for me.
I'm about an hour away from starting even more accounts, except I'm convinced that qualifies as some sort of illness.
Uh hi.. yeah.. perspecta? (Score:3, Funny)
pr0n (Score:1)
w00t! (Score:1)
Time to start investing in 3D pr0n companies!
Ahhh yeah! (Score:1)
Re:Ahhh yeah! (Score:1)
Now geeks can share the interest (Score:2)
Finally, something to get geeks interested in breasts.
RMN
~~~
Re:Now geeks can share the interest (Score:3, Funny)
Re:Now geeks can share the interest (Score:2)
RMN
~~~
Re:Now geeks can share the interest (Score:2, Funny)
curse of dimensionality... (Score:2, Insightful)
But... (Score:1, Redundant)
Finally! (Score:1)
I was tired of being told that renderings were in 3D... yet projected onto a flat screen. Yes, you can rotate it around in your program, but when you put it on a flat screen, it's really not 3D, it just conveys the idea of being 3D.
Re:Finally! (Score:1)
Re:Finally! (Score:1)
It's at www.actuality-systems.com. It actually is in 3D -- the "flat" screen rotates and you see it as a real 3D image that actually occupies a volume.
Re:Finally! (Score:2)
He knows this is real 3d, and he's happy about it.
That would be 500x500x500 resolution (Score:1)
Re:That would be 500x500x500 resolution (Score:2)
It's a start.
"The death star has cleared the planet"
Did this about a year ago... (Score:4, Interesting)
My part of it was hacking up the game engine (Virtools' kit) to render from two in-game viewpoints each frame and distorting the image in a third rendering pass so you'd get a correct image on the screen - lots of optimization, since we were rendering at that resolution two years ago when the Geforce2GTS and 1GHz P3s were the height of consumer technology. That, and some other blocks for the scripting language for level transfers and whatnot.
(The engine used to be marketed under the name Nemo, now called just Virtools Dev. Not too impressive graphically by today's standards, but it has the most artist-friendly scripting system I've EVER seen. If they strapped a decent rendering tech onto it and some network code, they'd have an absolutely outstanding project on their hands.)
Re:Did this about a year ago... (Score:2)
As a coder, what did the third pass do?
Re:Did this about a year ago... (Score:3, Interesting)
I would have liked to work out an entirely 2D routine for doing it instead of having to do the texturing, but I was under a rather nasty timeframe, and the example code we got from the inventor of the technique took some time to decipher. Why people insist on doing horrible pointer arithmetic instead of [] (which the compiler treats as *(x+i) anyway), I'll never know.
On a Geforce2GTS the fill rate and AGP2x limited us to about 20fps, but on an SGI NT workstation it absolutely FLEW, since main memory and video memory were shared. Unfortunately, we had to do 10 kiosks with this, and the budget didn't allow getting an SGI for each one.
ATC (Score:1)
Dragon's Lair! (Score:1, Offtopic)
Totally different technologies (Score:5, Insightful)
These are completely different technologies. The first is an "actual" 3D display. The voxels have a true location in 3D space, for instance. People can view it from any angle with no equipment.
The second appears to be just a large screen. People wear shutter or polarized glasses to send different images to the left and right eyes.
While the second technology is great, especially for high-resolution display to a single person, it really is annoying when used by multiple people at different locations in space.
Since there is only one set of (left and right) images on the flat screen, only one viewpoint can be chosen. If a group of people is sufficiently far from the screen, or sufficiently close together in the room, it's fine. But if you let the people wander around the room, you start getting perspective problems that really make collaborative viewing troublesome.
I have a feeling that we will be seeing voxel-based visualization like the one mentioned in this post more and more often. It's just more natural to use.
As someone who is in the field of high-resolution scientific visualization [llnl.gov] (that's me on the left), I certainly hope that technology will move in this direction.
There is also goggle-less flatscreen 3D! (Score:1)
Since there is only one set of (left and right) images on the flat screen, only one viewpoint can be chosen.
I know that at least at Philips Research Eindhoven they are working on flatscreen displays which do not have the disadvantages you mention. They can be viewed by multiple people simultaneously without a problem, and without goggles.
The technique is actually quite simple: take a high-res flatscreen and stick a prism sheet over it, so that depending on the angle you look at the screen from, you will see a different pixel.
At the lab they did have a prototype (1600x1200 as I recall) with 9 distinct views on it (thus reducing the apparent resolution to 533x400). Really cool to see 3D on a flat(!) screen, without goggles.
Those 9 views were actually all horizontally distinct, otherwise the effect wouldn't be very visible. So it doesn't work when you keep your head rotated, or move your head up and down.
(no link, sorry, they are not public about it. I saw it almost a year ago now, guess they must have improved since then)
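The resolution trade-off above is simple to check. Assuming the 9 views are taken from the panel in a 3x3 slanted-lenticular layout (my assumption; the poster doesn't say how the views are arranged), each view gets the panel resolution divided by 3 on each axis:

```python
# Per-view resolution of a multiview (prism-sheet/lenticular) display.
# Assumption: the 9 views are sampled from the panel in a 3x3 pattern,
# so each axis is divided by sqrt(9). The poster's recalled numbers
# (533x400 from 1600x1200) match this layout.
import math

panel_w, panel_h, views = 1600, 1200, 9

per_axis = int(math.sqrt(views))                 # 3
view_w, view_h = panel_w // per_axis, panel_h // per_axis
print(view_w, view_h)                            # 533 400
```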
Re:There is also goggle-less flatscreen 3D! (Score:2)
Obviously, a lot more than 9 distinct views would be nice, but even something as low as 25 would probably be useful.
If they engineer in the ability to move up and down, this would start being a serious contender.
Now I know how we'll use all that bandwidth... (Score:2)
Years ago I just didn't believe we'd really see volumetric 3D because the jump from (say) 640x480 to 640x480x480 just seemed too wide.
If we can get a stationary 400 x 400 x 400 image for $20,000, it doesn't seem all that much of a stretch to 400 x 400 x 400 x 30 frames per second... or from there to 1600 x 1600 x 1600 x 30 frames per second... or from there to 1600 x 1600 x 1600 x 30 frames per second for $200.
And disk speeds and Internet speeds are coming along just fine...
Re:Now I know how we'll use all that bandwidth... (Score:1)
400x400x400 stationary is 64 million voxels, no refresh required. Seems simple enough.
400x400x400x30fps is 1.92 billion voxels per second. That's right at the edge of current COTS video (with its mere 1-2 gigapixel refresh).
With Moore's corollary holding true for video cards (refresh rate doubles approximately every 9 months), it will take us at most 9 months to reach that comfortably. Still not too far off: March 2003.
Now, for a stationary 1600x1600x1600 image we need 4.096 billion voxels, and for a 30fps image at that resolution we need 122.88 billion voxels per second.
This is a jump of 2^6 from our present 2 gigapixel limit. This means, assuming the corollary continues to hold true, that we won't have this technology for a minimum of four and a half years.
That's still pretty damn fast considering how long it took to get a computer into everyone's home, or the time it took to push us from 720x348 to 1920x1600, but it's still quite a ways off...
After that they still have to drop it down to $200; I for one will not hold my breath.
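The arithmetic above can be sketched directly; note the 9-month doubling period and the 2 gigapixel/s starting point are the poster's own assumptions about circa-2002 hardware, not established facts:

```python
# Sketch of the projection above: how many refresh-rate doublings
# (at an assumed 9 months each) until video hardware can feed a
# 30fps volumetric display of a given cubic resolution?
import math

CURRENT_RATE = 2e9                        # pixels/s, the poster's figure

def months_until(side, fps=30, doubling_months=9):
    needed = side ** 3 * fps              # voxels per second
    doublings = max(0.0, math.log2(needed / CURRENT_RATE))
    return math.ceil(doublings) * doubling_months

print(400 ** 3 * 30)                      # 1,920,000,000 voxels/s
print(1600 ** 3 * 30)                     # 122,880,000,000 voxels/s
print(months_until(1600))                 # 54 months, i.e. 4.5 years
```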
Re:Now I know how we'll use all that bandwidth... (Score:2)
Actually I expect it to take longer than you do.
But I have a good chance of seeing it, not merely within my lifetime, but even before my retirement.
Considering that I once doubted that I would ever even own my own computer, that's not too bad!
Re:Now I know how we'll use all that bandwidth... (Score:2)
Give it 20 years when you're getting your mandatory "Common Sense Implant". PC? You ain't seen nothing yet.
Projected 3D (Score:1)
By these I mean...think R2D2's projection unit, the panels used in Final Fantasy: The Spirits Within, or, based on the trailer, the computer screen used by Tom Cruise in Minority Report.
Projectors exist, but they have to project onto something. I was thinking maybe you could use some kind of mist to project onto. Is it possible, when projecting from two locations, to have a 3D pixel (I think the term is voxel, but I'm not sure)?
Just curious...any additional links to these types of technology (and any terms referring to them to know what to look for) are greatly appreciated.
Re:Projected 3D (Score:1)
I have been batting around something similar in my mind for a long time...
Do you have any references or URLs that we could peruse, by chance?
Hardly hands on..? (Score:1)
But anyway, that is beside the point - I think technology like this is limited (with the exception of entertainment) unless you can get your hands into the 3D image and use it as if it were truly virtual reality. It is one thing to look at a 3D image of a brain tumour, but if you still have to use a 3D mouse to manipulate it, then there is still an intuitive leap to be made when using applications written for the display - just like the intuitive leap between the 2D mouse and the 2D display. What this needs is to be combined with technology like Reachin [reachin.se] [troll warning], where you can actually see the data in the same space where your hands are working on it. Much more appropriate :)
Voxels and games... (Score:1)
I see probelms with the approach (Score:2, Informative)
Now, nifty as it looks, I see a few problems...
First and foremost, you're going to be stuck representing solid 1-color materials, wireframes and ghosts with this. You're also not going to be able to make objects appear to be lit correctly. Why? Because the display has no idea what angle you're viewing it from. I'll explain.
Hold your thumb in front of the screen. It's blocking some part of the display, right? Move your head back and forth a little, and it will block different parts. Raise your head up a little or drop it down and it will block different parts again. The thing is, the display has no idea where you're looking from, so every part needs to be visible at all times. It can't clip out bits that are behind other things like a traditional 2D display. The result is that if you show a screen full of text, and draw a thumb in front, you still see the text through the thumb. Both will appear to act like ghosts.
Now, consider drawing a Coke can with a flashlight shining on the side. Again, it has no idea which side you're viewing from, so it's got to draw all sides of the can. The thing is, as you move around it, the logo on the front of the can shouldn't be visible when looking at the back of the can. Similarly, when you look at the side opposite the flashlight, it should be all dark. But since the display uses volumetric texels, it has no idea about the facing of each texel. Every texel is going to be drawn, so you'd see the backward logo when looking from the back, and you'd see what boils down to a really confusing lighting situation when viewing from the non-flashlight side. It's like ghosts or colored X-rays.
If you're still with me, that covers the reason for no shadows or non-uniform dull, not-too-shiny surfaces.
Next problem is - it's gonna be SLOW! Sad, but true! If it were a 3D bitmap representing equal units of a cube, that would be one thing. Unfortunately, it represents slices of a bitmap rotating through space.
Now, let me say this: Computers hate round things. Arcs, swooshes, ribbons, none of these are much fun for a computer to draw (comparatively speaking), much less, to render into.
Normally, polygon raster operations boil down to setting up a bunch of lines, one per scanline, and for each, figuring out how you progress across the line in measured, discrete steps. "I'm starting here in the texture, and I'll be there in the texture. I need to get there in 32 screen pixels, and I advance n units through regular steps of screen, texel and 1/z space." This tells the computer do the expensive calculation once, and just do 32 iterative steps to render the 32 pixels on that scanline. Any modern 3D engine is actually optimized to do the expensive stuff 1-2 times, creating the per-scanline numbers iteratively as well.
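The iterative stepping described above looks roughly like this. A toy affine version in Python; a real perspective-correct rasterizer would also step u/z, v/z, and 1/z per pixel, and the names here are illustrative:

```python
# Toy version of per-scanline stepping: do the expensive division
# once per span, then advance the texture coordinate by a fixed
# delta per pixel (one add, no divide, in the inner loop).

def draw_span(x0, x1, u0, u1, texture_row):
    pixels = []
    width = x1 - x0
    du = (u1 - u0) / width          # expensive setup, done once
    u = u0
    for _ in range(width):          # cheap per-pixel iteration
        pixels.append(texture_row[int(u)])
        u += du
    return pixels

row = draw_span(0, 32, 0.0, 8.0, texture_row=list(range(8)))
print(len(row))                     # 32 pixels stretched from 8 texels
```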
The only places where this approach doesn't work are where you're clipping against the edge of the display area. Clipped triangles are traditionally an order of magnitude more expensive to render than non-clipped ones. So much so, that terrible tricks are used to avoid them or reduce them to categories of special cases that can be tackled to attempt to avoid reverting to a true clip. For example, many display systems actually create waste RAM in a border around the screen. If a triangle doesn't penetrate the waste area, the rasterizer will go ahead and draw (or pretend to draw) the dummy pixels. It's only in the case where triangles are partly on screen, but go even beyond the dummy area that the hideously expensive render functions are called. Drawing millions of pixels per second that you know the user will never see? That sure points to a problem!
Enter the circular slice-based display space.
Here, for every single pixel, you've got to find which bits of a render go through. Essentially, you have to clip against the front and back of every single triangle you render as you calculate each slice. You're taking the worst hit on every single triangle!
What's even worse is that a single 'frame' (half rotation, assuming the rotating display plane is visible from front and back) consists of just shy of 200 renders. This means you're taking that 1% worst case scenario and repeating it 100% of the time, and repeating it about 200 times per frame. And because you're dealing with an arc for the rotational advancement (remember, computers like even, linear, discrete steps), you're dealing with curved surfaces instead of little cubes and the planes of a view frustum. Essentially, you're looking for the union of an arbitrary material and a stuffed piece of macaroni instead of merely finding the portion that fits within a little box. This makes the checks for pixel penetration several orders of magnitude more expensive and makes it even more expensive to attempt to reuse data from one slice to the next.
Hee. Plus the display is connected to your PC via SCSI2W, which is also a not-too-minor detail. You've got over 100 million pixels to send across per 'frame'. Even if they're just 1-byte pixels (256 color), and partial updates, that's asking a lot of a dual-channel 20MHz(?) bus.
Mind, we're still discovering things today which would have sped up rendering on our Commodore 64s. The computational cost will come down over time as more ways are developed for rendering in non-uniform/curved space, and as different spatial representation methods are explored. This is a nifty advance, certainly a step closer than the silly lenticular-lens-based 3D systems and the layered LCD-over-CRT approaches.
Still. Think of a ghosty AutoCAD on a 286. Look, but don't touch. We've still got a ways to go before 3D games and movies become a reality.
But don't get me wrong: It's a neat advancement, and it gives me hope. If I could borrow one, I think I'd make a noisy whirring ghost town snow globe. The shape just begs for it. And I'd love to get cracking on trying to find efficient algorithms for the unusual render space. *sigh.* $60k though. Maybe eBay can help me out on this one in another 20 years.
Re:I see probelms with the approach (Score:2)
I agree with the inability to draw non-ghosty objects, since it can't render an opaque voxel, and it can't not render voxels.
But the performance part wouldn't be that bad. At least, not quite as bad as you suggest. I think. I'm guessing a lot.
So, let's say you've rendered the scene as a 3d bitmap. If you can solve the storage and bandwidth issues, this is actually nice in a couple ways. Like you don't have to step through 1/z space, since your x and y aren't perspective-adjusted. You're rendering directly into view coordinates. The equations for the gradients are simpler. Okay, that isn't going to help too much, especially considering that you have to render every voxel.
But one thing that should help is that you don't have to clip to a sphere, or cylinder, or anything abnormal like that. Use a cube. Render a nice 3D cubic bitmap. Then, for each circular slice, you just have to determine which voxels in the bitmap are covered by that slice. That's nice and linear. Your 2D bounding box would be different on each slice as it tracked the vertical faces of the cube, but those values could be precalculated, as they are static.
So while it would certainly be bad, because you can't do any of the nice tricks to reduce the -amount- of rendering, the rendering itself would basically be comprised of the same ol' stuff, just triangles clipped against 2D surfaces. Bad, but a problem solvable merely through bandwidth.
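The cube-then-slices idea above can be sketched like this. A crude nearest-neighbour version; the bitmap size and the lit voxel are made up for illustration:

```python
# Crude sketch of the approach above: render into a cubic voxel
# bitmap first, then, for each angular position of the spinning
# screen, sample the voxels the rotated plane passes through.
import math

N = 16                                  # cubic bitmap is N x N x N
volume = [[[0] * N for _ in range(N)] for _ in range(N)]
volume[8][8][8] = 1                     # one lit voxel near the centre

def slice_at(theta):
    """Sample the vertical plane through the axis at angle theta,
    nearest-neighbour, into an N x N screen bitmap."""
    c = N // 2
    out = [[0] * N for _ in range(N)]
    for r in range(-c, N - c):          # radial position on the screen
        x = c + int(round(r * math.cos(theta)))
        z = c + int(round(r * math.sin(theta)))
        if 0 <= x < N and 0 <= z < N:
            for y in range(N):          # vertical column copies straight over
                out[r + c][y] = volume[x][y][z]
    return out

lit = sum(v for row in slice_at(0.0) for v in row)
print(lit)                              # the slice at angle 0 hits the voxel
```

A real implementation would precompute the (x, z) sample positions per slice, since they never change; this sketch just shows the geometry.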
how useful is 464^3? (Score:2)
topic title change (Score:2)
Science: 3D Visualization Moves In Circles
;P
Flat screens -vs- this thing (Score:2)
You can't change perspective on the flatscreen.. not like a hologram. What you see is what you see, until the software changes it.
You can't peek around something or shift your point of view by just moving.
This globe thing you can look at, walk around, and see things as if they were a solid image. Just like a hologram (but with a 360-degree FOV, of course).
Man, this is supposed to be cutting edge? (Score:1)
SWEET!! (Score:2)
Re:SWEET!! (Score:1)
Want one? $27500 (Score:1)
Points to be considered... (no pun intended there) (Score:1)
1. This is NOT a two-dimensional display. It creates images that are as "real" as any other object, although made of light (think "Death Star attack briefing" here).
2. This means, that you can perceive the dimensionality (is that a word?) of the displayed objects even if you are blind in one eye, are color-blind, have only one leg, don't like cats, etc.
3. No glasses or other viewing "helpers" are needed. Watch the screen, Luke...
4. This object works much like a traditional big-screen projection television...only the screen is rotating around the centerpoint of the globe at a considerable speed. The speed is fast enough that your eyes can't really see the screen, but can see the projected images.
5. This is NOT a new concept. The new thing about it is that it is the LARGEST one commercially available to date (large in terms of resolution). I've personally seen versions of this display that are about 3 feet (~1 meter) across that use a matrix of LEDs on a rotating board. Same effect, really....older technology....not as good as this one. Here's one of the google returns for this design [vdivde-it.de]. Another design that has had a good amount of research is to create images using lasers shot onto a spinning helix or screw shape [google.com].
6. So far, resolution is the biggest hurdle. Next is refresh rate (it is, quite obviously, tied to the rotation speed and to the speed and falloff of the light-emitting device: LED, laser, phosphor, etc).
7. It really is amazing to see it in person. Don't knock it. It will be in more wide-spread use in the near future and will be a huge leap forward for certain tasks. Most people will never see one (except maybe in a science museum) and will never need to use one, though.
8. Using a wireless network, now you too can forward a holographic message from Obi-wan to the Senate! heh.
Resolution (Score:2)
Re:Gotta get that September 11th Hysteria in there (Score:1)
Re:Gotta get that September 11th Hysteria in there (Score:1)
Re:Gotta get that September 11th Hysteria in there (Score:1)
Furthermore, there are places in the world that have ATC systems and ARE below sea level.
Re:old technology (Score:1)
The biggest problem with this isn't making it work, since it has before, it's making software that doesn't suck! Can anyone imagine using Windows3D, and getting the blue screen of death from all angles? I suppose Micro$oft would be happy because all of our software would instantly become (more?) useless.
Re:Nuked (Score:1)