High-Speed Video Free With High-Def Photography
bugzappy notes a development out of the University of Oxford, where scientists have developed a technology capable of capturing a high-resolution still image alongside very high-speed video. The researchers started out trying to capture images of biological processes, such as the behavior of heart tissue under various circumstances. They combined off-the-shelf technologies found in standard cameras and digital movie projectors. What's new is that the picture and the video are captured at the same time on the same sensor. This is done by allowing the camera's pixels to act as if they were part of tens, or even hundreds, of individual cameras taking pictures in rapid succession during a single normal exposure. The trick is that the pattern of pixel exposures keeps the high-resolution content of the overall image, which can then be used as-is, to form a regular high-res picture, or be decoded into a high-speed movie. The research is detailed in the journal Nature Methods (abstract only without subscription).
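To make the pixel-grouping idea concrete, here is a minimal numpy sketch — a toy model under assumed parameters (2x2 grouping, four sub-intervals), not the researchers' actual pipeline. Each pixel in a 2x2 tile integrates light during a different quarter of the exposure window, so the single recorded frame can be used as-is as the still, or de-interleaved into a short low-res movie:

```python
import numpy as np

# Toy model of the pixel-grouping idea (2x2 tiles): during one normal
# exposure, each pixel in a tile integrates light over a different
# quarter of the exposure window. The recorded frame IS the high-res
# still; de-interleaving its tiles yields a 4-frame low-res movie.

rng = np.random.default_rng(0)
H, W, T = 8, 8, 4                      # sensor size; sub-frames per exposure
scene = rng.random((T, H, W))          # scene intensity at each sub-interval

rows, cols = np.indices((H, W))
phase = 2 * (rows % 2) + (cols % 2)    # which sub-interval each pixel samples

recorded = np.zeros((H, W))
for t in range(T):
    recorded += np.where(phase == t, scene[t], 0.0)

still = recorded                               # full-res still, used as-is
movie = [recorded[t // 2::2, t % 2::2]         # four quarter-res frames
         for t in range(T)]
```

In the real system the still presumably benefits from all the light across the exposure; here the toy simply shows how one pixel pattern carries both the image and the time sequence.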
Re:I've actually thought about this... (Score:2, Informative)
It depends. In good lighting you don't need to register every photon. However, in a dark room or when watching the night sky, each photon counts. Here is an informative article: http://math.ucr.edu/home/baez/physics/Quantum/see_a_photon.html
The human eye can actually register a flash of about 90 photons (roughly 10% of them will reach the retina, so about 9 photons are enough to activate the receptors). The sensitivity also depends on the wavelength.
Re:interlacing (Score:5, Informative)
Visual effects technology company 'The Foundry' have done quite a lot of research into this area already.
Their Furnace F_SmartZoom [thefoundry.co.uk] tool uses motion estimation techniques to analyse successive film frames and derive single frames of higher resolution than any one of the moving frames. And their Rolling Shutter [thefoundry.co.uk] tool uses local motion estimation algorithms to analyse the staggered frames output by CMOS cameras and reconstruct them into complete, un-staggered frames.
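The super-resolution half of that idea can be sketched in a few lines of numpy. This toy assumes the sub-pixel shifts between frames are exactly half a pixel and already known — recovering them is what the motion estimation actually does — so reconstruction reduces to re-interleaving the low-res frames onto a high-res grid:

```python
import numpy as np

# Shift-and-add super-resolution, idealised: four low-res frames are
# samplings of one high-res image at half-pixel offsets. With the
# offsets known, reconstruction is just re-interleaving.

hi = np.arange(64, dtype=float).reshape(8, 8)          # "true" high-res image
frames = {(dr, dc): hi[dr::2, dc::2]                   # four 4x4 shifted samplings
          for dr in (0, 1) for dc in (0, 1)}

recon = np.empty_like(hi)
for (dr, dc), f in frames.items():
    recon[dr::2, dc::2] = f                            # place samples back

assert np.array_equal(recon, hi)                       # exact in the toy case
```

Real footage is noisy and the shifts are arbitrary sub-pixel amounts, so the production tools have to interpolate rather than simply slot samples back in.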
It's very interesting that the scientists in Oxford are exploiting this side effect of CMOS cameras by combining both these technologies to derive high resolution, un-blurred frames from multiple CMOS images.
As a side note, District 9 was shot on the Red camera (a CMOS camera that exhibits this rolling shutter effect), and a lot of Image Engine's post-production work on that film required this sort of analysis so that staggered frames could be reconstructed, enabling 3-D motion tracking for the insertion of CG into live-action plates.
Re:I've actually thought about this... (Score:4, Informative)
Re:I've actually thought about this... (Score:3, Informative)
A high resolution optical sensor delivers a shitload of data - 20 or more megabytes for every frame. The processing of the data from the Bayer matrix (we won't take the Foveon into account for the sake of the argument) and resizing also take time. You need at least 60 fps to get rid of lag while moving. Have fun processing 1.2 gigabytes per second.
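For reference, the back-of-the-envelope arithmetic on those two figures (note that 20 MB/frame at 60 fps comes out in gigabytes per second):

```python
frame_mb = 20             # megabytes per raw high-res frame (the figure above)
fps = 60                  # minimum rate for lag-free motion
rate_mb_s = frame_mb * fps
print(rate_mb_s)          # 1200 MB/s, i.e. 1.2 GB/s
```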
Re:interlacing (Score:3, Informative)
The idea is that the light sensitive components have a minimum response time that is too large to capture high frame-rate digital data without tricks.
It's not actually a minimum response time issue, at least not from a CCD sensor point of view (as opposed to CMOS sensors you tend to see in consumer-level digital video and photography products).
"Traditional" high-speed photography with CCD sensors usually works by lighting the scene with high-intensity light sources so that the sensors are able to gather enough photons within the short exposure times to be "useful." Have a look around GooTube for things like the "SawStop demo" on the Discovery Time Warp program for a good example of this.
If you look at a single pixel element on a CCD sensor it's essentially a photon well - it receives photons from the environment and converts them to an electric charge. Assuming the electronics reading charges out of the CCD sensor are good enough, a single photon striking a pixel element would be detectable, thus it's not really a pixel-related minimum response time issue.
The conventional electronics used in the read-out process of a CCD sensor essentially do the following: they enable a "row" of pixel elements and clock the electric charges across the "columns" using something akin to a bucket brigade network. The charge from the column getting clocked off the side of the sensor is read by an ADC (analogue-to-digital converter) and stored in a digital buffer (RAM) before being sent to the host device. Each row is "clocked out" and read in this fashion, then the whole CCD sensor is shorted to reset any residual charges ready for the next exposure. Any response-time issues are in the clocking-out process, since the weakest link in the chain will be the time needed by the ADC to capture and convert a single charge.
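That read-out sequence can be modelled in a few lines of Python — a toy simulation only, with a hypothetical 8-bit ADC whose parameters are invented for illustration:

```python
import numpy as np

# Toy model of CCD read-out: for each row, charges are clocked across
# the columns like a bucket brigade; each charge falling off the edge
# is digitised by the ADC, one conversion at a time.

def adc(charge, levels=256, full_well=1.0):
    # Hypothetical 8-bit ADC: quantise an analogue charge to 0..255.
    return min(int(charge / full_well * levels), levels - 1)

def read_out(sensor):
    """Clock out a 2-D charge array row by row, column by column."""
    digitised = []
    for row in sensor:
        shift_register = list(row)
        while shift_register:
            charge = shift_register.pop(0)   # charge clocked off the edge
            digitised.append(adc(charge))    # one ADC conversion per charge
    return digitised

sensor = np.array([[0.0, 0.5], [1.0, 0.25]])
print(read_out(sensor))                      # [0, 128, 255, 64]
```

Because every charge funnels through the ADC serially, the ADC's conversion time bounds the whole frame rate — the "weakest link" point above.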
The proposed technique changes the read-out process in several ways, vastly increasing the complexity of the CCD sensor's bucket brigade network and reset electronics in the process. Say, for example, the sensor is set up as an array of 2x2 elements (the article proposes 4x4 elements). The read-out process needs to read out pixels in four phases: even columns on even rows, odd columns on even rows, even columns on odd rows, odd columns on odd rows. That sounds complex already, but it gets worse: because the sensor will essentially be exposed continuously, you also need to reset the charges in those groups individually, otherwise you'll get residual charge build-up that skews the data over time. If you don't, all pixel elements will eventually read as full charges.
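A quick numpy sketch of that four-phase read-and-reset schedule (toy model; `read_and_reset` and the charge values are invented for illustration):

```python
import numpy as np

# Four-phase read-and-reset for a 2x2 grouping: each phase reads one
# pixel of every 2x2 tile and must also reset those pixels, otherwise
# charge accumulates across the continuous exposure.

H, W = 4, 4
charge = np.full((H, W), 0.5)            # charge built up so far

def read_and_reset(charge, phase):
    pr, pc = divmod(phase, 2)            # phase 0..3 -> (row parity, col parity)
    sample = charge[pr::2, pc::2].copy() # read this phase's pixels
    charge[pr::2, pc::2] = 0.0           # reset them for the next sub-exposure
    return sample

frames = [read_and_reset(charge, p) for p in range(4)]
assert charge.sum() == 0.0               # every pixel has been reset exactly once
```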
Electronics complexity issues aside, I'm wondering how useful this technique will be for high-speed scientific research. When looking at the resultant high-speed video, each frame will be offset slightly in both the horizontal and vertical directions (1/2 pixel in a 2x2 network, 1/4 pixel in a 4x4 network). To some degree this can be corrected using sub-pixel blending, but that will introduce errors into the frames, reducing their utility. Nonetheless, it sounds like a very interesting technique.
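For what it's worth, that sub-pixel correction could be sketched as a simple bilinear half-pixel shift — toy code, not from the paper — which also shows the interpolation blur that introduces the errors mentioned above:

```python
import numpy as np

# Register a frame at a (+0.5, +0.5) pixel offset by bilinear
# interpolation: average each pixel with its right/down neighbours
# (edge-clamped). The averaging is exactly the blur being traded for
# alignment.

def shift_half_pixel(frame):
    padded = np.pad(frame, ((0, 1), (0, 1)), mode="edge")
    return 0.25 * (padded[:-1, :-1] + padded[1:, :-1]
                   + padded[:-1, 1:] + padded[1:, 1:])

f = np.array([[0.0, 1.0], [2.0, 3.0]])
print(shift_half_pixel(f))               # [[1.5 2. ], [2.5 3. ]]
```

Sharp detail gets smeared across neighbouring pixels, which is why the corrected frames are slightly less useful for measurement than natively aligned ones would be.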