High-Speed Video Free With High-Def Photography 75

bugzappy notes a development out of the University of Oxford, where scientists have developed a technology capable of capturing a high-resolution still image alongside very high-speed video. The researchers started out trying to capture images of biological processes, such as the behavior of heart tissue under various circumstances. They combined off-the-shelf technologies found in standard cameras and digital movie projectors. What's new is that the picture and the video are captured at the same time on the same sensor. This is done by allowing the camera's pixels to act as if they were part of tens, or even hundreds, of individual cameras taking pictures in rapid succession during a single normal exposure. The trick is that the pattern of pixel exposures keeps the high-resolution content of the overall image, which can then be used as-is, to form a regular high-res picture, or be decoded into a high-speed movie. The research is detailed in the journal Nature Methods (abstract only without subscription).
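
A minimal sketch (not the authors' code) of the decoding idea described above, assuming a hypothetical 4x4 tile of staggered per-pixel exposures, so one coded readout yields either a full-resolution still or 16 low-resolution frames:

    import numpy as np

    TILE = 4  # assumed 4x4 exposure pattern: 16 time slots per tile

    def decode(coded):
        """Split one coded exposure into TILE*TILE low-resolution frames.

        coded: 2D array in which pixel (y, x) is assumed to have been
        exposed during time slot (y % TILE) * TILE + (x % TILE).
        """
        h, w = coded.shape
        frames = np.empty((TILE * TILE, h // TILE, w // TILE), coded.dtype)
        for dy in range(TILE):
            for dx in range(TILE):
                frames[dy * TILE + dx] = coded[dy::TILE, dx::TILE]
        return frames

    coded = np.random.rand(1024, 1024)  # stand-in for a real sensor readout
    still = coded                       # used as-is: the high-res picture
    movie = decode(coded)               # or decoded: 16 frames of 256x256

The real exposure pattern is chosen more carefully than this simple raster order (that choice is part of the paper's contribution), but the resolution-for-time trade is the same.
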
This discussion has been archived. No new comments can be posted.

  • How long (Score:2, Funny)

    by Anonymous Coward
    How long before it is used for porn?
    • Just hit the "Pause" button.
    • by cgenman ( 325138 )

      It has already happened, kind of. Esquire [gizmodo.com] has done cover shoots with a video camera, then selected individual frames to pull out for the photos and the cover.

      Of course, what the article is talking about is changing how high-speed photography happens in order to get high-speed video on the same chip: essentially, dividing each CCD into 16 separate regions and firing them off sequentially to form 16 frames of video or one still image. There is some image degradation inherent in what they're talking about doing, of course.

  • by HouseOfMisterE ( 659953 ) on Wednesday February 17, 2010 @02:42AM (#31166172)

    I think it's past my bedtime.

  • interlacing (Score:2, Interesting)

    by Anonymous Coward

    Sounds like they have a high-resolution image sensor, but the timing of the data samples from certain groups of pixels is staggered. Sort of like how one frame of interlaced NTSC DVD video can represent either a single "high resolution" 720x480 image or a pair of 720x240 images 1/60th of a second apart.
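
    For the record, the interlacing analogy in code, assuming `frame` holds one 720x480 interlaced NTSC frame with rows as scanlines:

        import numpy as np

        frame = np.zeros((480, 720, 3), np.uint8)  # placeholder frame
        field_a = frame[0::2]  # even scanlines: one 720x240 field at time t
        field_b = frame[1::2]  # odd scanlines: a 720x240 field 1/60 s later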

    • Re:interlacing (Score:4, Insightful)

      by MrNaz ( 730548 ) * on Wednesday February 17, 2010 @03:39AM (#31166424) Homepage

      Yeah, that's the first thing I thought as well; the principle is similar to video interlacing from back in the day, except that this is more sophisticated and could conceivably be used to capture extremely high-definition, extremely high-framerate footage.

      If you apply this technology to high-grade 50-megapixel Hasselblad sensors, you could conceivably achieve frame rates of thousands of frames per second at 2K or even 4K resolution using gear that costs under $100k. Currently, that sort of photography is limited to national science bodies and multi-million-dollar budgets. Being able to do that sort of thing for under six figures would open up HUGE research possibilities for university science labs and other relatively fund-poor institutions.
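
      A rough sanity check of those figures (the 50 fps base readout below is an assumption, not from the comment):

          # How many 2K or 4K pixel groups fit in one 50-megapixel readout?
          sensor_pixels = 50e6
          base_fps = 50  # assumed full-frame readout rate
          for name, pixels in [("2K", 2048 * 1080), ("4K", 4096 * 2160)]:
              groups = int(sensor_pixels // pixels)
              print(f"{name}: {groups} groups -> ~{groups * base_fps} fps")
          # 2K: 22 groups -> ~1100 fps; 4K: 5 groups -> ~250 fps.

      So "thousands of frames per second" in 2K needs a faster base readout than 50 fps, but the order of magnitude is within reach.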

      • Yeah, that's the first thing I thought as well; the principle is similar to video interlacing from back in the day, except that this is more sophisticated and could conceivably be used to capture extremely high-definition, extremely high-framerate footage.

        I could only read the abstract, but this just seems to be the reverse of Frameless Rendering: Double Buffering Considered Harmful [nus.edu.sg] which relates to rendering 3D graphics in scattered sets of pixels.

      • Re: (Score:3, Interesting)

        by Rockoon ( 1252108 )
        I think you've missed the point.

        Use the high-frame-rate camera to take a high-frame-rate video, or use it to take a high-resolution picture, but you can't take a high-frame-rate, high-resolution video.

        The idea is that the light-sensitive components have a minimum response time that is too large to capture high-frame-rate digital data without tricks. So engineers, being what they are, use separate groups of them with staggered capture times in order to achieve high frame rates. In the simplest case there would be two groups of pixels exposed alternately, doubling the frame rate at half the resolution.
        • by dosowski ( 15924 )

          Use the high-frame-rate camera to take a high-frame-rate video, or use it to take a high-resolution picture, but you can't take a high-frame-rate, high-resolution video.

          So it's the Heisenberg Photography Principle?

        • Re: (Score:3, Informative)

          by scdeimos ( 632778 )

          The idea is that the light-sensitive components have a minimum response time that is too large to capture high-frame-rate digital data without tricks.

          It's not actually a minimum-response-time issue, at least not from a CCD sensor point of view (as opposed to the CMOS sensors you tend to see in consumer-level digital video and photography products).

          "Traditional" high-speed photography with CCD sensors usually works by lighting the scene with high-intensity light sources so that the sensors are able to gather enough photons within the short exposure times to be "useful." Have a look around GooTube for things like the "SawStop demo" on the Discovery Time Wrap p

      • by Ihmhi ( 1206036 )

        Hasselblad sensors

        There's seriously something called a Hasselblad sensor? That is fucking awesome. That sounds like something off of Babylon 5. "Incoming enemy fighters on the Hasselblad sensors!"

  • by LordLucless ( 582312 ) on Wednesday February 17, 2010 @02:49AM (#31166204)
    As I read this, there are three comments. Two are about porn. Slashdot in a nutshell.
    • Re: (Score:2, Funny)

      by Anonymous Coward

      As I read this, there are three comments. Two are about porn. Slashdot in a nutshell.

      Actually, only one of those three was about porn. The other two (and this one) are just offtopic. So that makes Slashdot 25% horny, 25% pedantic, and 75%-100% offtopic.

    • And then your post complains about Slashdot. Add that to the nutshell.
    • As I read this, there are three comments. One is about porn, and another is about Slashdot itself. Slashdot in a nutshell.
  • by pushing-robot ( 1037830 ) on Wednesday February 17, 2010 @03:18AM (#31166318)

    ...and how eventually cameras will not have a "shutter" as we know it but will simply keep track of how each pixel was illuminated at each moment in time. Of course, shutterless sensors are already in widespread use; we call them "eyes", and they have the same benefits that TFA describes: your brain can observe low-detail fast-moving objects and high-detail static objects at the same time without having to reconfigure anything. Consequently, shutterless cameras would have the side benefit of better approximating biological vision.

    The ultimate dream would be a truly holographic sensor that records exactly where, when, and at what angle each photon hit the sensor, so that the zoom, exposure time, and focus can be changed in post-processing (as well as a lot of other cool stuff).

    • by nmos ( 25822 )

      and how eventually cameras will not have a "shutter" as we know it but will simply keep track of how each pixel was illuminated at each moment in time.

      I believe most non-SLR digital cameras already do without a physical shutter. I've always thought that for these cameras it might be useful to break up a typical exposure into multiple shorter exposures and just stack the resulting images, using the differences between frames to detect noise and blur due to camera shake, etc.
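
      A minimal sketch of that stacking idea, assuming `frames` is an (N, H, W) array of short exposures of the same scene; pixels that deviate too far from the per-pixel median (shake or noise outliers) are replaced by it before averaging:

          import numpy as np

          def stack(frames, reject_sigma=3.0):
              frames = np.asarray(frames, dtype=np.float64)
              med = np.median(frames, axis=0)
              # robust per-pixel spread estimate (MAD scaled to ~sigma)
              mad = np.median(np.abs(frames - med), axis=0) + 1e-9
              mask = np.abs(frames - med) < reject_sigma * 1.4826 * mad
              return np.where(mask, frames, med).mean(axis=0)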

    • by derGoldstein ( 1494129 ) on Wednesday February 17, 2010 @03:44AM (#31166458) Homepage
    This entire field can easily be extrapolated. First, the shutter is a mechanical component that isn't required -- every portable computer has a video camera that can take still images. The reason we still have shutters in high-end cameras is the way sensors are currently designed, and the fact that modern DSLRs are basically upgraded film SLRs.
      And what about the lens? If the sensors are omnidirectional and can simply keep reporting their state at a high frequency, the "lens" (its optical purpose) can be done in software. You just need a high density of sensors and the ability to process the information fast enough.
      Obviously, the individual sensors can't be truly omnidirectional, but rather their visibility angle would depend on the geometry of the surface they're placed on -- which could be a hemisphere, or even an almost complete sphere. As you mentioned, the angle of light would still be relevant, but this would be done on an individual sensor basis -- rather than one lens orchestrating the entire image.

      There, we solved it. Engineers, get to work!
      • > ...the "lens" (its optical purpose) can be done in software.

        No it can't. The pixels have no information about the direction from which each photon arrived. Without a lens, each of your pixels will receive photons from every point in the scene, with no way to sort them out.

        • by kcitren ( 72383 )
          Yes, it can. See computer-assisted tomography for an example, or http://www.informationweek.com/news/hardware/handheld/showArticle.jhtml?articleID=222002840 [informationweek.com].
        • No it can't. The pixels have no information about the direction from which each photon arrived. Without a lens, each of your pixels will receive photons from every point in the scene, with no way to sort them out.

          ...and 3 lines later in the post I said: "the angle of light would still be relevant, but this would be done on an individual sensor basis -- rather than one lens orchestrating the entire image".
          A bit like a fly's eye -- every sensor would only report on a single angle. Place an array of these sensors on a hemisphere (or on an only slightly convex/concave surface), make it dense enough, and you *could* do the rest in software.

    • The ultimate dream would be a truly holographic sensor that records exactly where, when, and at what angle each photon hit the sensor...

      I'm usually one of the last to predict the limits of future technology, but I cannot imagine how this could ever be any more practical or useful than capturing/tracking every hundredth, or thousandth, or even millionth photon. I think maybe there's a lack of perspective as to the insane rate at which photons strike a given surface area.

      • Re: (Score:2, Informative)

        by Anonymous Coward

        It depends. In good lighting you don't need to register all photons. However, in a dark room, or when watching the night sky, each photon counts. Here is an informative article: http://math.ucr.edu/home/baez/physics/Quantum/see_a_photon.html
        The human eye can actually register a flash of about 90 photons (only about 10% of them reach the retina, so about 9 photons are enough to activate the receptors). The sensitivity also depends on the wavelength.

        • I can't reach the article you posted for some reason, but from what I recall there are several species whose eyes are designed for low light *and* high frequency -- owls are one example. If you want resolution, hawks are able to see a moving target the size of a mouse from a mile away.

          Of course, by giving any example found in nature you're setting the bar pretty high. If we wanted to replicate the functionality of the simplest plankton (like a picoplankton), we'd probably need to construct something far more complex than any current camera.
      • by EdZ ( 755139 ) on Wednesday February 17, 2010 @06:09AM (#31167218)
        It's of massive value in astronomy. And it's exactly what superconducting image sensors [esa.int] do.
    • Re: (Score:1, Interesting)

      by Anonymous Coward

      There have already been several adaptive sensor/camera designs and prototypes proposed that adjust the integration (shutter) time independently for each pixel on the sensor so that no pixel is saturated (maxed out). Consequently, knowing the per-pixel integration time and sensor value allows you to reconstruct a high-dynamic-range image. This design seems to be an application of the idea of binning (which has been used for noise reduction, and for improved dynamic range when coupled with a spatially varying attenuation mask).
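
      A sketch of the per-pixel-integration-time reconstruction that comment describes: if you record how long each pixel integrated before being read out, value divided by time estimates radiance over a much wider dynamic range (the array shapes and the set of shutter times here are made up):

          import numpy as np

          values = np.random.randint(0, 4096, (480, 640)).astype(np.float64)  # 12-bit readings
          t_int = np.random.choice([1.0, 0.5, 0.25, 0.125], (480, 640))       # per-pixel shutter times
          radiance = values / t_int  # short-exposure (bright) pixels scale up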

    • by Mashdar ( 876825 )
      My eyes have shutters. You must look pretty weird. As for your holographic idea, Heisenberg (or even de Broglie) might have something to say about that (something about the difficulty of identifying both the position and momentum of a particle). Also (being from /., I have not read TFA), are they talking about staggering pixel capacitor charging across rows? Up until this point, has everyone really been doing this row by row? That seems sub-optimal for high-def images of moving objects, anyway.
    • The ultimate dream would be a truly holographic sensor that records exactly where, when, and at what angle each photon hit the sensor

      That's already been done to some extent. Put a grid of small lenses in front of your sensor and you can trace each ray back through the main lens and reconstruct a lower resolution image from any point of view, or focal length, that would have been viewable from any point within the volume of the camera.
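
      A minimal shift-and-add refocusing sketch for that lenslet idea, assuming lf[u, v] is the low-resolution sub-image seen from lenslet offset (u, v); shifting each sub-image by alpha*(u, v) before averaging selects the focal depth (np.roll wraps at the edges, so a real implementation would crop the borders):

          import numpy as np

          def refocus(lf, alpha):
              """lf: (U, V, H, W) light field -> one (H, W) refocused image."""
              U, V, H, W = lf.shape
              out = np.zeros((H, W))
              for u in range(U):
                  for v in range(V):
                      dy = int(round(alpha * (u - U // 2)))
                      dx = int(round(alpha * (v - V // 2)))
                      out += np.roll(lf[u, v], (dy, dx), axis=(0, 1))
              return out / (U * V)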

    • Re: (Score:3, Interesting)

      by gillbates ( 106458 )

      The overwhelming majority of digital cameras do not have a shutter. You do realize that clicking sound comes not from a shutter, but from a small speaker, right?

      I'm honestly sorry I didn't patent this technique back in 2005 when I was working with digital image sensors, but suffice it to say, it's been known about and used in industry for quite some time. Engineers have always known there was a tradeoff between image resolution and frame rate, and this appears to be a rather obvious compromise. An image sensor can be read out at a higher rate if you settle for fewer pixels per frame.

      • I guess I should have said "frameless" cameras (it was two in the morning...).

        Many cameras today don't have a physical shutter, but they still work (to my knowledge) by exposing the sensor to light for a significant fraction of a second, then reading the cumulative charge on the sensor elements and "flushing" the elements, and then starting over again.

        A "frameless" camera is an oversampled camera; instead of exposing the sensor for ten milliseconds or more at a time, you would take readings more than once p

    • The ultimate dream would be a truly holographic sensor that records exactly where, when, and at what angle each photon hit the sensor, so that the zoom, exposure time, and focus can be changed in post-processing (as well as a lot of other cool stuff).

      No, the ultimate sensor would record the quantum wavefunctions of the "photons", rather than their collapse. Then you could... well, I'm not sure what that would allow, but it's clearly capturing more information.

    • That'd be great for the purposes of mimicking biological eyes, i.e. a sensor for a general-purpose robot to keep data rates down. However, it wouldn't be good for photography or cinematography as an art at all. The idea is to capture both the minute details and the fast motion at the same time to create an experience that's a little different from reality. It's like HDR or UV photography. Our eyes don't have that kind of capability, yet we still would want to take such pictures.

  • This sounds like something I remember a flatmate talking about previously; there is a free software program that did this: you took a few low-resolution pictures, ran them through the program, and got out a high-resolution image. The same can be done with a video (as the source of the low-resolution pictures).

    I can't recall the name of the program; I'll have a hunt for it.

  • Is Slashdot not about open access? I've read enough complaints from scientist bloggers about having to be on campus, in the office, or else pay for articles they already have a subscription to.

    A little digging back to the researchers themselves can doubtless second-source the information; I regularly see the authors post the articles themselves, or at least an informative link: http://www.isis-innovation.com/licensing/3268.html [isis-innovation.com]

  • So when do I get a firmware update to turn my HandyCam into a high-speed video monster?

"A child is a person who can't understand why someone would give away a perfectly good kitten." -- Doug Larson

Working...