High-Speed Video Free With High-Def Photography
bugzappy notes a development out of the University of Oxford, where scientists have devised a technique for capturing a high-resolution still image alongside very high-speed video. The researchers started out trying to capture images of biological processes, such as the behavior of heart tissue under various circumstances, and combined off-the-shelf technologies found in standard cameras and digital movie projectors. What's new is that the picture and the video are captured at the same time on the same sensor. This is done by letting the camera's pixels act as if they were part of tens, or even hundreds, of individual cameras taking pictures in rapid succession during a single normal exposure. The trick is that the pattern of pixel exposures preserves the high-resolution content of the overall image, which can then be used as-is to form a regular high-res picture, or be decoded into a high-speed movie. The research is detailed in the journal Nature Methods (abstract only without subscription).
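Roughly, the scheme might look like this toy NumPy sketch (the sizes and the pseudo-random grouping are illustrative assumptions, not details from the paper):

    import numpy as np

    # Illustrative sizes only; the paper's actual sensor layout isn't public here.
    H, W, T = 240, 320, 16            # sensor height/width, sub-exposures per exposure

    rng = np.random.default_rng(0)
    scene = rng.random((T, H, W))     # stand-in for T instants of a moving scene

    # Assign every pixel to one of T "shutter" groups, pseudo-randomly
    # (a regular grid would also work, at the cost of structured artifacts).
    groups = rng.integers(0, T, size=(H, W))

    # Coded exposure: each pixel integrates light only during its own slot.
    coded = np.take_along_axis(scene, groups[None, :, :], axis=0)[0]

    # Decode: frame t is the sparse set of pixels whose slot was t; a real
    # decoder would inpaint the gaps, here they are just masked with NaN.
    frames = [np.where(groups == t, coded, np.nan) for t in range(T)]

    # The coded image keeps the full-resolution spatial pattern, so it can
    # also serve as-is as the ordinary high-res still.
    still = coded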
How long (Score:2, Funny)
Re: (Score:2)
It has already happened, kind of. Esquire [gizmodo.com] has done cover shoots on a video camera, then selected individual frames to pull out for photos and the cover.
Of course, what the article is talking about is changing how high-speed photography happens in order to get high-speed video on the same chip... essentially, dividing each CCD into 16 separate regions and firing those off sequentially to form 16 frames of video or 1 image. There is some image degradation inherent in what they're talking about doing, of course.
I read the title as "High-Def Pornography"... (Score:4, Funny)
I think it's past my bedtime.
Re: (Score:1)
Q: How come there's no truly clever or truly witty humor on Slashdot?
A: Because shit like this keeps getting modded up.
O: Slashdot seems to have an open door policy for @$$#0l3$, though. ;)
interlacing (Score:2, Interesting)
Sounds like they have a high resolution image sensor, but the timing of the data samples from certain groups of pixels is staggered. Sort of like how one frame of interlaced NTSC DVD video can represent a single "high resolution" 720x480 image, or two 720x240 images captured 1/60th of a second apart.
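The field analogy, concretely (a trivial NumPy sketch):

    import numpy as np

    frame = np.zeros((480, 720))   # one interlaced 720x480 frame
    field_a = frame[0::2, :]       # even scanlines: 720x240, captured at time t
    field_b = frame[1::2, :]       # odd scanlines: 720x240, captured 1/60 s later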
Re:interlacing (Score:4, Insightful)
Yea that's the first thing I thought as well; the principle is similar to video interlacing from back in the day, except that this is more sophisticated, and could conceivably be used to capture extremely high definition, extremely high framerate footage.
If you apply this technology to high-grade 50 Mpix Hasselblad sensors, you could conceivably achieve frame rates of thousands of frames per second in 2k or even 4k resolution using gear that costs under $100k. Currently, that sort of photography is limited to national science bodies and multi-million dollar budgets. Being able to do that sort of thing for under 6 figures would open up HUGE research possibilities for university science labs and other relatively fund-poor institutions.
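Back-of-the-envelope on those numbers (all figures illustrative, not from the article):

    sensor_pixels = 50e6                       # hypothetical 50 Mpix medium-format back
    pixels_per_2k = 2048 * 1080                # ~2.2 Mpix per 2k frame
    groups = sensor_pixels // pixels_per_2k    # ~22 staggered pixel groups
    base_fps = 60                              # assumed full-sensor readout rate
    print(groups * base_fps)                   # ~1320 fps at 2k from one sensor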
Re:interlacing? (Score:2)
Yea that's the first thing I thought as well; the principle is similar to video interlacing from back in the day, except that this is more sophisticated, and could conceivably be used to capture extremely high definition, extremely high framerate footage.
I could only read the abstract, but this just seems to be the reverse of Frameless Rendering: Double Buffering Considered Harmful [nus.edu.sg] which relates to rendering 3D graphics in scattered sets of pixels.
Re: (Score:3, Interesting)
Use the high-frame-rate camera to take a high-frame-rate video, or use it to take a high resolution picture, but you can't take a high-frame-rate high-resolution video.
The idea is that the light-sensitive components have a minimum response time that is too large to capture high frame-rate digital data without tricks. So, engineers being what they are, they use separate groups of them with staggered capture times in order to achieve high frame rates. In the simplest case there would be two alternating groups, much like interlaced video.
Re:interlacing (Score:5, Informative)
Visual effects technology company 'The Foundry' have done quite a lot of research into this area already.
Their Furnace F_SmartZoom [thefoundry.co.uk] tool uses motion estimation techniques to analyse successive film frames and derive single frames of higher resolution than any one of the moving frames. And their Rolling Shutter [thefoundry.co.uk] tool uses local motion estimation algorithms to analyse the staggered frames output by CMOS cameras and reconstruct them into complete un-staggered frames.
It's very interesting that the scientists in Oxford are exploiting this side effect of CMOS cameras by combining both these technologies to derive high resolution, un-blurred frames from multiple CMOS images.
As a side note, District 9 was shot on the Red camera (a CMOS camera that exhibits this rolling shutter effect), and a lot of Image Engine's post-production work on that film required this sort of analysis so that staggered frames could be reconstructed, enabling 3-D motion tracking for the insertion of CG into live action plates.
Re: (Score:1)
Use the high-frame-rate camera to take a high-frame-rate video, or use it to take a high resolution picture, but you can't take a high-frame-rate high-resolution video.
So it's the Heisenberg Photography Principle?
Re: (Score:1)
It's the Lincoln/"fooling the people" effect.
Re: (Score:3, Informative)
The idea is that the light-sensitive components have a minimum response time that is too large to capture high frame-rate digital data without tricks.
It's not actually a minimum response time issue, at least not from a CCD sensor point of view (as opposed to the CMOS sensors you tend to see in consumer-level digital video and photography products).
"Traditional" high-speed photography with CCD sensors usually works by lighting the scene with high-intensity light sources so that the sensors are able to gather enough photons within the short exposure times to be "useful." Have a look around GooTube for things like the "SawStop demo" on the Discovery Time Wrap p
Re: (Score:3)
Hasselblad sensors
There's seriously something called a Hasselblad sensor? That is fucking awesome. That sounds like something off of Babylon 5. "Incoming enemy fighters on the Hasselblad sensors!"
Representative sample (Score:4, Funny)
Re: (Score:2, Funny)
As I read this, there are three comments. Two are about porn. Slashdot in a nutshell.
Actually, only one of those three was about porn. The other two (and this one) are just offtopic. So that makes Slashdot 25% horny, 25% pedantic, and 75%-100% offtopic.
I've actually thought about this... (Score:5, Interesting)
...and how eventually cameras will not have a "shutter" as we know it but will simply keep track of how each pixel was illuminated at each moment in time. Of course, shutterless sensors are already in widespread use; we call them "eyes", and they have the same benefits that TFA describes: Your brain can observe low-detail fast-moving objects and high-detail static objects at the same time without having to reconfigure anything. Consequently, shutterless cameras would have the side benefit of better approximating biological vision.
The ultimate dream would be a truly holographic sensor that records exactly where, when, and at what angle each photon hit the sensor, so that the zoom, exposure time, and focus can be changed in post-processing (as well as a lot of other cool stuff).
Some stills cameras do too, but.... (Score:3, Interesting)
There are already shutterless cameras. They're called video cameras...
Some stills cameras, e.g. on phones, are shutterless as well, but often have some interesting artefacts [flickr.com].
In this case it is probably due to the high level of correlation between pixel position and "shutter" time. I'm guessing (judging only by the abstract) that in the paper they are using a pseudo-random pattern for the pixel sampling, which would trade these weird effects for 'noise' that would be less obvious.
Re: (Score:2)
Some stills cameras, e.g. on phones, are shutterless as well, but often have some interesting artefacts [flickr.com].
Ha, sweet; but that's not the most illustrative image from that set. I prefer this one [flickr.com]... you know, since hardware folk might mistake your linked image for some new, weed-whacker style floppy propeller system. :3
In the photo I linked, I love how the "pseudoblades" also have well-defined shadows xD
Re: (Score:2)
you know, since hardware folk might mistake your linked image with some new, weed-whacker style floppy propeller system.
Well I just looked at your image and I think I'm looking at some matter-displacement, levitation/suspension technology that involves volume-altering materials... and possibly LSD.
Re: (Score:1)
A good descriptive article here [dvxuser.com]
Re: (Score:2)
I'm never flying again.
Re: (Score:2)
and how eventually cameras will not have a "shutter" as we know it but will simply keep track of how each pixel was illuminated at each moment in time.
I believe most non-SLR digital cameras already do without a physical shutter. I've always thought that for these cameras it might be useful to break up a typical exposure into multiple shorter exposures and just stack the resulting images using the differences between frames to detect noise and blur due to camera shake etc.
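A minimal sketch of that stacking idea (NumPy; assumes the frames are already aligned, and the function name is mine):

    import numpy as np

    def stack_exposures(frames):
        # Median across the stack rejects outliers caused by camera shake
        # or transient noise; a real implementation would align the frames
        # against each other first.
        return np.median(np.stack(frames, axis=0), axis=0)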
Re:I've actually thought about this... (Score:4, Funny)
And what about the lens? If the sensors are omnidirectional and can simply keep reporting their state at a high frequency, the "lens" (its optical purpose) can be done in software. You just need a high density of sensors and the ability to process the information fast enough.
Obviously, the individual sensors can't be truly omnidirectional, but rather their visibility angle would depend on the geometry of the surface they're placed on -- which could be a hemisphere, or even an almost complete sphere. As you mentioned, the angle of light would still be relevant, but this would be done on an individual sensor basis -- rather than one lens orchestrating the entire image.
There, we solved it. Engineers, get to work!
Re: (Score:2)
Sorry, but no. The resolution of an eyepiece is higher, and the reaction speed is the speed of light, so there is no lag.
Re: (Score:2)
Resolution: If you got a 1080p image into that tiny projector, would that be enough? How about a 4k image? At what point would the resolution be high enough?
Both of these characteristics will improve with time -- it won't be long before the clunky reflex mechanism is out.
Re: (Score:3, Informative)
A high resolution optical sensor delivers a shitload of data - 20 or more megabytes for every frame. The processing of the data from the Bayer matrix (we won't take the Foveon into account for the sake of the argument) and resizing also take time. You need at least 60 fps to get rid of lag while moving. Have fun processing 1.2 terabytes per second.
Re: (Score:2)
Gigabytes, sorry.
Re: (Score:2)
> ...the "lens" (its optical purpose) can be done in software.
No it can't. The pixels have no information about the direction from which each photon arrived. Without a lens, each of your pixels will receive photons from every point in the scene with no way to sort them out.
Re: (Score:2)
No it can't. The pixels have no information about the direction from which each photon arrived. Without a lens, each of your pixels will receive photons from every point in the scene with no way to sort them out.
...and 3 lines later in the post I said: "the angle of light would still be relevant, but this would be done on an individual sensor basis -- rather than one lens orchestrating the entire image".
A bit like a fly's eye -- every sensor would only report on a single angle. Place an array of these sensors on a hemisphere (or an only slightly convex/concave surface), make it dense enough, and you *could* do the rest in software.
Re: (Score:2)
The ultimate dream would be a truly holographic sensor that records exactly where, when, and at what angle each photon hit the sensor...
I'm usually one of the last to predict the limits of future technology, but I cannot imagine how this could ever be any more practical/useful than capturing/tracking every hundredth, or thousandth, or even millionth photon. I think maybe there's a lack of perspective as to the insane rate at which photons strike a given surface area.
Re: (Score:2, Informative)
It depends. In good lighting you don't need to register all photons. However, in a dark room, or for watching the night sky, each photon counts. Here is an informative article: http://math.ucr.edu/home/baez/physics/Quantum/see_a_photon.html
The human eye can actually register a flash of about 90 photons (only about 10% of them will reach the retina, so about 9 photons are enough to activate the receptors). The sensitivity also depends on the wavelength.
Re: (Score:2)
Of course, by giving any example found in nature you're setting the bar pretty high. If we wanted to replicate the functionality of the simplest plankton (like a picoplankton), we'd probably need to construct something far more sophisticated than any camera we can build today.
Re: (Score:1, Interesting)
There have already been several adaptive sensor/camera designs and prototypes proposed that adjust the integration (shutter) time independently for each pixel on the sensor so that no pixel is saturated (maxed out). Consequently, knowing the per-pixel integration time and sensor value allows you to reconstruct a high dynamic range image. This design seems to be an application of the idea of binning (which has been used for noise reduction and improved dynamic range when coupled with a spatially varying attenuation mask).
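A minimal sketch of that reconstruction, assuming a linear sensor response (the function name is made up):

    import numpy as np

    def reconstruct_hdr(values, integration_times):
        # Each pixel stopped integrating just before saturating, so with a
        # linear sensor response its radiance is proportional to
        # recorded value / per-pixel exposure time.
        return values.astype(float) / integration_times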
Re: (Score:2)
The ultimate dream would be a truly holographic sensor that records exactly where, when, and at what angle each photon hit the sensor
That's already been done to some extent. Put a grid of small lenses in front of your sensor and you can trace each ray back through the main lens and reconstruct a lower resolution image from any point of view, or focal length, that would have been viewable from any point within the volume of the camera.
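That's the standard plenoptic shift-and-add trick; a toy sketch (NumPy; the names and the integer-shift simplification are mine):

    import numpy as np

    def refocus(subviews, alpha):
        # subviews[u][v] is the image seen through the microlens offset (u, v).
        # Shifting each view by alpha*(u, v) before averaging synthetically
        # moves the focal plane (the classic shift-and-add refocus).
        U, V = len(subviews), len(subviews[0])
        acc = np.zeros_like(subviews[0][0], dtype=float)
        for u in range(U):
            for v in range(V):
                du = int(round(alpha * (u - U // 2)))
                dv = int(round(alpha * (v - V // 2)))
                acc += np.roll(subviews[u][v], (du, dv), axis=(0, 1))
        return acc / (U * V)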
Re: (Score:3, Interesting)
The overwhelming majority of digital cameras do not have a shutter. You do realize that clicking sound comes not from a shutter, but from a small speaker, right?
I'm honestly sorry I didn't patent this technique back in 2005 when I was working with digital image sensors, but suffice it to say, it's been known about and used in industry for quite some time. Engineers have always known there was a tradeoff between image resolution and frame rate, and this appears to be a rather obvious compromise. An image sensor only has so much readout bandwidth; you can spend it on pixels or on frames.
Re: (Score:2)
I guess I should have said "frameless" cameras (it was two in the morning...).
Many cameras today don't have a physical shutter, but they still work (to my knowledge) by exposing the sensor to light for a significant fraction of a second, then reading the cumulative charge on the sensor elements and "flushing" the elements, and then starting over again.
A "frameless" camera is an oversampled camera; instead of exposing the sensor for ten milliseconds or more at a time, you would take readings more than once p
Re: (Score:1)
No, the ultimate sensor would record the quantum wavefunctions of the "photons", rather than the collapse of them. Then you could... well, I'm not sure what that would allow, but it's clearly capturing more information.
Re: (Score:2)
That'd be great for the purposes of mimicking biological eyes, i.e. a sensor for a general-purpose robot to keep data rates down. However, it wouldn't be good for photography or cinematography as an art at all. The idea is to capture both the minute details and the fast motion at the same time to create an experience that's a little different from reality. It's like HDR or UV photography. Our eyes don't have that kind of capability, yet we still would want to take such pictures.
Sounds familiar (Score:2)
This sounds like something I remember a flatmate talking about previously; there was a free software program that did this. You took a few low-resolution pictures, ran them through the program, and got out a high-resolution image. The same can be done with a video (using its frames as the low-resolution pictures).
I can't recall the name of the program; I'll have a hunt for it.
Re: (Score:2)
> You took a few low-resolution pictures, ran them through the program, and
> got out a high resolution image.
This is not the same thing at all.
Re: Sounds familiar (Score:1)
Sounds like this development would greatly improve a 2008 Casio camera a friend told me about a couple of weeks ago: 6 MP, with full-res shots going into the buffer at 60 fps before you fully press the shutter button, and up to 1200 fps (tiny) video.
Hate to sound like a shill, but "high-resolution still image alongside very high-speed video" describes this pretty well, depending on your definition of "high" at least.
http://www.exilim.com/intl/ex_f1/features1.html [exilim.com]
http://www.casio.com/products/Cameras/EXILIM_High-Spe [casio.com]
Free as in Free Links Please (OH MY A FREE LINK!) (Score:1)
Is Slashdot not about open access? I read enough complaints from scientist bloggers about having to be on-campus in-the-office or-else-pay for articles they have a subscription to.
A little digging back to the researchers themselves could doubtless second-source the information; I regularly see authors post the articles themselves, or at least an informative link: http://www.isis-innovation.com/licensing/3268.html [isis-innovation.com]
Firmware update (Score:2)