Science

High-Speed Video Free With High-Def Photography

bugzappy notes a development out of the University of Oxford, where scientists have developed a technology capable of capturing a high-resolution still image alongside very high-speed video. The researchers started out trying to capture images of biological processes, such as the behavior of heart tissue under various circumstances. They combined off-the-shelf technologies found in standard cameras and digital movie projectors. What's new is that the picture and the video are captured at the same time on the same sensor. This is done by allowing the camera's pixels to act as if they were part of tens, or even hundreds, of individual cameras taking pictures in rapid succession during a single normal exposure. The trick is that the pattern of pixel exposures keeps the high-resolution content of the overall image, which can then be used as-is, to form a regular high-res picture, or be decoded into a high-speed movie. The research is detailed in the journal Nature Methods (abstract only without subscription).
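A hypothetical NumPy sketch of the staggered-exposure idea described in the summary (an illustration of the concept, not the authors' code; the sensor size, group count, and random scene are all made up). Pixels are assigned to K interleaved groups, each integrating light during a different sub-interval of one exposure. The same raw frame can then be read as one full-resolution still, or decoded group-by-group into K sparse high-speed frames.

```python
import numpy as np

H, W, K = 8, 8, 4                      # sensor size, number of exposure groups
rng = np.random.default_rng(0)

# Stippled assignment: pixel (y, x) belongs to one of K exposure groups.
group = (np.arange(H)[:, None] + np.arange(W)[None, :]) % K

# Simulated scene: K sub-frames of light arriving during one exposure window.
scene = rng.random((K, H, W))

# Each pixel integrates only during its own group's sub-interval.
raw = np.take_along_axis(scene, group[None], axis=0)[0]

# Read-out 1: the raw frame is used as-is as the full-resolution still.
still = raw

# Read-out 2: decode into K sparse high-speed frames (NaN = not sampled).
movie = np.full((K, H, W), np.nan)
for t in range(K):
    movie[t][group == t] = raw[group == t]
```

Each decoded frame has only 1/K of the pixels filled in, which is the resolution-for-speed tradeoff the comments below discuss.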
  • interlacing (Score:2, Interesting)

    by Anonymous Coward on Wednesday February 17, 2010 @03:43AM (#31166182)

    Sounds like they have a high resolution image sensor but the timing of the data samples from certain groups of pixels is staggered. Sort of like how one frame of interlaced NTSC DVD video can represent a single "high resolution" 720x480 image, or a series of two 720x240 images 1/60th second apart.
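The interlacing analogy above can be sketched in a few lines of NumPy (assumed frame contents, purely illustrative): one 720x480 interlaced frame splits into two 720x240 fields captured 1/60 s apart.

```python
import numpy as np

# One interlaced NTSC-resolution frame: 480 rows, but the even and odd
# rows were captured 1/60 s apart as two separate fields.
frame = np.arange(480 * 720).reshape(480, 720)

field_a = frame[0::2]   # even rows: first field,  t = 0
field_b = frame[1::2]   # odd rows:  second field, t = 1/60 s

# Viewed one way: a single 720x480 "high resolution" image (frame).
# Viewed the other: two 720x240 images taken in quick succession.
```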

  • by pushing-robot ( 1037830 ) on Wednesday February 17, 2010 @04:18AM (#31166318)

...and how eventually cameras will not have a "shutter" as we know it but will simply keep track of how each pixel was illuminated at each moment in time. Of course, shutterless sensors are already in widespread use; we call them "eyes", and they have the same benefits that TFA describes: Your brain can observe low-detail fast-moving objects and high-detail static objects at the same time without having to reconfigure anything. Consequently, shutterless cameras would have the side benefit of better approximating biological vision.

    The ultimate dream would be a truly holographic sensor that records exactly where, when, and at what angle each photon hit the sensor, so that the zoom, exposure time, and focus can be changed in post-processing (as well as a lot of other cool stuff).

  • by Anonymous Coward on Wednesday February 17, 2010 @04:53AM (#31166494)

There have already been several adaptive sensor/camera designs and prototypes proposed that adjust the integration (shutter) time independently for each pixel on the sensor so that no pixel is saturated (maxed out). Consequently, knowing the per-pixel integration time and sensor value allows you to reconstruct a high dynamic range image. This design seems to be an application of the idea of binning (which has been used for noise reduction, and for improved dynamic range when coupled with a spatially varying attenuation filter), but used instead to integrate over staggered intervals.
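The per-pixel integration-time idea in that comment can be shown with a toy example (made-up numbers, not any specific sensor design): choose a shorter exposure wherever a pixel would saturate, then divide the measured value by its exposure time to recover the scene's dynamic range.

```python
import numpy as np

rng = np.random.default_rng(1)
radiance = rng.uniform(0.1, 100.0, size=(4, 4))   # true scene radiance
full_well = 1.0                                    # sensor saturation level

# Adaptive per-pixel integration time: shorter where the pixel would
# otherwise saturate, so every measurement stays below full well.
t_int = np.minimum(0.5, 0.9 * full_well / radiance)
value = np.minimum(radiance * t_int, full_well)    # measured, unsaturated

# Knowing t_int per pixel lets us reconstruct the HDR image exactly.
recovered = value / t_int
```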

  • Re:interlacing (Score:3, Interesting)

    by Rockoon ( 1252108 ) on Wednesday February 17, 2010 @05:52AM (#31166820)
    I think you've missed the point.

You can use the high-frame-rate camera to take a high-frame-rate video, or use it to take a high-resolution picture, but you can't take a high-frame-rate, high-resolution video.

The idea is that the light-sensitive components have a minimum response time that is too long to capture high-frame-rate digital data without tricks. So engineers, being what they are, use separate groups of them with staggered capture times in order to achieve high frame rates. In the simplest case there would be only two groups of sensors, probably called odd and even, which would allow double the frame rate of that minimum response time.

What these blokes have noted is that the groups of sensors which capture a single frame are stippled across the capture device, so if the capture times were not staggered the effective resolution would be higher. Essentially they are un-staggering the capture times post-capture in order to achieve that high resolution, meaning that you cannot have both at the same time.

The most they can save appears to be 50%: the cost of a regular high-resolution capture device, which they didn't get with their high-frame-rate device purchase.
  • by N Monkey ( 313423 ) on Wednesday February 17, 2010 @06:11AM (#31166928)

    There are already shutterless cameras. They're called video cameras...

    Some stills cameras, e.g. on phones, are shutterless as well, but often have some interesting artefacts [flickr.com].

In this case it is probably due to the high level of correlation between pixel position and "shutter" time. I'm guessing (judging only by the abstract) that in the paper they use a pseudo-random pattern for the pixel sampling, which would trade these weird effects for 'noise' that would be less obvious.

  • by gillbates ( 106458 ) on Wednesday February 17, 2010 @11:54AM (#31170166) Homepage Journal

    The overwhelming majority of digital cameras do not have a shutter. You do realize that clicking sound comes not from a shutter, but from a small speaker, right?

I'm honestly sorry I didn't patent this technique back in 2005 when I was working with digital image sensors, but suffice it to say, it's been known about and used in industry for quite some time. Engineers have always known there is a tradeoff between image resolution and frame rate, and this appears to be a rather obvious compromise. An image sensor chip has limited bandwidth for reading out pixels, so the frame rate is naturally a function of the image pixel count.

Most image sensors can be reconfigured rather quickly, perhaps even between frames. This technique is hardly worth a patent, as it's obvious to anyone who's ever had to make a tradeoff between frame rate and light sensitivity, or frame rate and resolution. For video, there's the standard D1 resolution of 720 by 480; for stills, the full resolution of the sensor is used. It's so obvious that it's hard to consider it novel enough to patent.
