Scientists Use Camera With Human-Like Vision To Capture 5,400 FPS Video (petapixel.com)
An anonymous reader quotes a report from PetaPixel: A team of scientists from the Swiss Federal Institute of Technology in Zurich (ETH Zurich) has figured out how to capture super slow-motion footage using what's called an "Event Camera." That is: a camera that sees the world as a continuous stream of information, the way humans do. Regular cameras work by capturing discrete frames, recapturing the same scene 24 or more times per second and then stitching those frames together to create a video. Event cameras are different. They capture "pixel-level brightness changes" as they happen, recording each individual light "event" without wasting time re-capturing all the stuff that stays the same from frame to frame.
As ETH Zurich explains, some of the advantages of this type of image capture are "a very high dynamic range, no motion blur, and a latency in the order of microseconds." The downside is that there's no easy way to process the resulting "footage" into something you can display, because current algorithms all expect to receive a set of discrete frames. Well, there was no easy way. This is what the folks at ETH Zurich have just improved upon, developing a reconstruction model that can interpret the footage to the tune of 5,000+ frames per second. The results are astounding: a 20% increase in reconstructed image quality over any previous model, and the ability to output "high frame rate videos (more than 5,000 frames per second) of high-speed phenomena (e.g. a bullet hitting an object)," even under "challenging lighting conditions" with high dynamic range. Their findings have been published in a research paper titled High Speed and High Dynamic Range Video with an Event Camera.
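To make "pixel-level brightness changes" concrete, here is a minimal sketch (not ETH Zurich's pipeline, just an illustration) of how an event stream can be synthesized from ordinary frames: a pixel fires an event whenever its log-brightness has drifted past a fixed threshold since the last event it fired.

import numpy as np

def frames_to_events(frames, threshold=0.2, eps=1e-3):
    """Convert a stack of grayscale frames (T, H, W) into a list of
    (t, y, x, polarity) events, firing whenever a pixel's log-brightness
    moves more than `threshold` away from its value at its last event.
    This is only a toy model of how a real event sensor behaves."""
    log_ref = np.log(frames[0].astype(np.float64) + eps)  # per-pixel reference level
    events = []
    for t in range(1, len(frames)):
        log_now = np.log(frames[t].astype(np.float64) + eps)
        diff = log_now - log_ref
        fired = np.abs(diff) >= threshold
        ys, xs = np.nonzero(fired)
        for y, x in zip(ys, xs):
            events.append((t, y, x, 1 if diff[y, x] > 0 else -1))
        log_ref[fired] = log_now[fired]  # reset reference where events fired
    return events

# Toy usage: a bright square sliding across a dark background.
frames = np.zeros((10, 32, 32))
for t in range(10):
    frames[t, 10:16, t:t + 6] = 1.0
print(len(frames_to_events(frames)), "events from 10 frames")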
I remember proposing this exact thing... (Score:3)
... several times on Slashdot over the years (most recently, here [slashdot.org]). Great to see it actually getting some research! Now I have a proper term ("Event Camera") for it. :)
No, this is not analog! (Score:1)
Unless your analog camera had continuous film too, and not physical picture frames.
Re:I remember proposing this exact thing... (Score:5, Interesting)
I'll note that I do envision this going further (as I've written in the past). A particularly important extension would be recording data in absolute polar coordinates, along with a path for the camera over time, so that rotation of the camera does not spam you with new data for each pixel. E.g. if column 1 of your "CCD" corresponds to 5.86 degrees east of due north, and the camera is rotated so that column 2 of the "CCD" now corresponds to 5.86 degrees east of due north, all that changes is that column 2 is now storing data where column 1 was storing it previously. You only want things that actually change in the scene to trigger refresh events. Ideally the metadata would allow multiple camera sources to be included in the same scene, which would aid in stereoscopy or true 3D reconstruction. Additionally, datastreams wouldn't be strictly confined to "RGB", but would allow for whatever spectral windows the camera wants to record.
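A rough sketch of the column-to-azimuth remapping described above, assuming the camera only yaws and its angular resolution is known (all numbers here are made up):

# Minimal sketch of rotation-compensated addressing: sensor columns are
# stored under absolute azimuth bins, so a pure camera yaw only changes
# which column feeds which bin, not the scene data already stored there.
SENSOR_COLUMNS = 640
DEG_PER_COLUMN = 0.05                       # assumed horizontal angular resolution
AZIMUTH_BINS = int(round(360 / DEG_PER_COLUMN))

def column_to_azimuth_bin(column, camera_yaw_deg):
    """Map a sensor column to an absolute azimuth bin (degrees east of
    due north), given the camera's current yaw from its recorded path."""
    offset_deg = (column - SENSOR_COLUMNS / 2) * DEG_PER_COLUMN
    azimuth = (camera_yaw_deg + offset_deg) % 360.0
    return int(round(azimuth / DEG_PER_COLUMN)) % AZIMUTH_BINS

# Yaw the camera by exactly one column's angle: column 2 now lands in the
# bin that column 1 fed before, so nothing in the scene needs rewriting.
assert column_to_azimuth_bin(1, 0.0) == column_to_azimuth_bin(2, -DEG_PER_COLUMN)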
As described previously, compression becomes trickier with an event camera. Realistically you want A) to record changes in light intensity as splines, and only trigger a new datapoint when a single spline can no longer accurately represent the light intensity curve that's been observed since your last datapoint; B) trigger new datapoints for adjacent pixels that "nearly need to switch splines", so that you can write out data in blocks, thus reducing per-pixel overhead; and C) bundle as many blocks into a single time writeout as possible, thus reducing per-timestamp overhead. There are tradeoffs, of course - (B) for example creates datapoints sooner than they might be needed, while (C) reduces your temporal resolution (your worst case being "everything written all at once", aka, frames).
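As a toy illustration of point (A), here is a per-pixel version that uses straight-line segments in place of proper splines; the tolerance and test signal are invented:

import numpy as np

def linear_segment_writeouts(times, values, tolerance=0.05):
    """Toy version of idea (A): keep extending a single straight-line
    segment over the samples seen since the last writeout, and emit a
    writeout (datapoint) only when no single segment can fit them all
    within `tolerance`. Real hardware would presumably use higher-order
    splines and update incrementally rather than refitting."""
    writeouts = [(times[0], values[0])]
    start = 0
    for end in range(2, len(times) + 1):
        t, v = times[start:end], values[start:end]
        slope, intercept = np.polyfit(t, v, 1)      # best-fit line over the open segment
        residual = np.max(np.abs(slope * t + intercept - v))
        if residual > tolerance:                    # one segment no longer suffices
            writeouts.append((times[end - 2], values[end - 2]))
            start = end - 2                         # new segment starts at the last good sample
    writeouts.append((times[-1], values[-1]))
    return writeouts

# A light curve that is flat, then ramps, then flattens again: only the
# corners should force writeouts.
t = np.arange(30, dtype=float)
v = np.clip((t - 10) / 10, 0, 1)
print(linear_segment_writeouts(t, v))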
Realistically, CCDs are poorly suited for an event camera, as they read out everything in rows, all pixels at once. You really want an event-driven paradigm in the sensor hardware itself - the hardware itself accumulating light-curve splines and triggering write-out events when the deviation of the light curve can no longer be represented with a single spline. Likewise, for compression, you want datapoint write-outs from one pixel to trigger new datapoint write-outs from blocks of their neighbors that are almost ready, and potentially from nearly-ready non-adjacent blocks as well - all at the hardware level. You want your bus to simply listen for events from your sensor, bundle together all newly-read data into a compressed format with all required metadata, and to write them out.
Re: (Score:3)
A particularly important extension would be recording data in absolute polar coordinates, along with a path for the camera over time, so that rotation of the camera does not spam you with new data for each pixel. E.g. if column 1 of your "CCD" corresponds to 5.86 degrees east of due north, and the camera is rotated so that column 2 of the "CCD" now corresponds to 5.86 degrees east of due north, all that changes is that column 2 is now storing data where column 1 was storing it previously.
If you're putting that level of smarts into the sensor itself, then maybe consider following the human system again and perform a certain amount of image processing/recognition at that level before passing the data down to the controller? Would probably help with your compression problems while you're at it.
Re: (Score:3)
Indeed, a sort of "pass your data to the next pixel over" event (which would be triggered in response to rotation of the camera) is not only possible to implement in hardware, it's what CCDs already do [wikipedia.org] in order to read out. :)
Re: (Score:2)
In short, in a way, what's being discussed is rather like a hardware neural net, where you have activation potentials, and a "synapse" from one pixel can trigger "synapses" from those that it's connected to. In the analogy, "frequently synapsing pixels" correspond to those whose light data is undergoing significant, unpredictable change (and some of those adjacent to them), while "seldom synapsing pixels" correspond to those in areas that are seeing little change to what they're recording.
Re: (Score:2)
You may wish to take a look at DFPA technology [mit.edu]: these have an ADC (and sometimes additional flexible processing) under each pixel, streaming photon counts and timestamps.
Also related are APD arrays [nasa.gov]; a few dozen pixels [first-sensor.com] that are typically individually queried at GHz rates. Useful for low-light applications like LIDAR, where they simply record the timestamp of incoming photons. Postprocessing then converts that to a ranged image. Using this can generate things [mit.edu] like large-scale 3D maps of an area [mit.edu].
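For context, the "timestamp to ranged image" conversion is just time of flight, range = c * Δt / 2. A minimal sketch with invented numbers:

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def photon_timestamp_to_range(emit_time_s, return_time_s):
    """Convert an outgoing laser pulse time and the photon's return
    timestamp into a range in metres: the light covers the distance
    twice, hence the division by two."""
    return SPEED_OF_LIGHT * (return_time_s - emit_time_s) / 2.0

# A photon that comes back about 667 ns later corresponds to roughly 100 m.
print(photon_timestamp_to_range(0.0, 667e-9))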
Re: (Score:2)
IIUC, from the summary, they didn't invent the "event camera", they invented a way to process the output to make it useful.
Re: (Score:2)
That's great, Rei. If you also invent a time machine, you could patent the event camera before the first commercially available ones hit the market in 2008: https://ieeexplore.ieee.org/document/4444573
Are the single pixels still sampled? (Score:3, Insightful)
I can see this as a much better way to capture video. Basically every pixel is recorded as if it were an audio stream, and only the deltas are stored, with an extreme sampling rate, like SACD. Then compress away everything where nothing changes, using an algorithm that understands motion compensation based on per-pixel motion quaternions.
But it would still be samples. Quantized in X, Y, time and brightness.
So not continuous. But the next best thing: a wave function that goes through all the sample points, to interpolate a continuous wave above the Nyquist frequency of human vision. At least in the time/brightness space.
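A toy sketch of the per-pixel delta idea, with linear interpolation standing in for the "wave function through the sample points" (a spline would be closer); the quantization step is made up:

import numpy as np

def delta_encode(samples, step=1 / 255):
    """Store one pixel's brightness as quantized deltas, roughly in the
    spirit of treating it like a heavily oversampled audio stream."""
    q = np.round(np.asarray(samples, dtype=float) / step).astype(int)
    return q[0], np.diff(q)              # first sample + integer deltas

def delta_decode(first, deltas, step=1 / 255):
    q = np.concatenate(([first], first + np.cumsum(deltas)))
    return q * step

# Reconstruct, then resample between the stored points.
brightness = np.sin(np.linspace(0, np.pi, 16)) ** 2
first, deltas = delta_encode(brightness)
decoded = delta_decode(first, deltas)
dense_t = np.linspace(0, 15, 61)
print(np.interp(dense_t, np.arange(16), decoded)[:5])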
More coverage from IEEE (Score:2)
Re: (Score:2)
First, yes. Better... that depends on what your goal is. E.g. if you want to discriminate a precise frequency, the mammalian eye is a lousy instrument, though better than the planarian eyespot. But you can build a camera that will react to changes in just one frequency of light. Of course, it will be totally insensitive to all the other frequencies (if it's well designed).
So better depends on your purpose. Apparently this approach can photograph a bullet impacting a wine glass, and capture the event
Re: (Score:2)
Indeed, event cameras are only half of the equation; the other half is event displays. But you can at least convert event camera videos to frame-based videos, for playback either with frame-dropping or slow motion.
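A minimal sketch of that conversion half: not the paper's learned reconstruction, just naive accumulation of event polarities into frames at a chosen rate, with an invented per-event gain.

import numpy as np

def events_to_frames(events, height, width, duration_s, fps=5000, gain=0.2):
    """Naively integrate (t, y, x, polarity) events into frames at `fps`.
    The ETH Zurich work uses a learned reconstruction network; this just
    sums polarities per pixel per frame interval to show the idea."""
    n_frames = int(np.ceil(duration_s * fps))
    frames = np.zeros((n_frames, height, width))
    state = np.zeros((height, width))            # running brightness estimate
    events = sorted(events, key=lambda e: e[0])
    i = 0
    for f in range(n_frames):
        t_end = (f + 1) / fps
        while i < len(events) and events[i][0] < t_end:
            _, y, x, pol = events[i]
            state[y, x] += gain * pol            # each event nudges the pixel up or down
            i += 1
        frames[f] = state
    return frames

# Toy usage: a single pixel that brightens halfway through a millisecond.
evts = [(0.0005, 4, 4, +1), (0.0006, 4, 4, +1)]
video = events_to_frames(evts, 8, 8, duration_s=0.001)
print(video.shape, video[-1, 4, 4])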
Breakthrough (Score:2)
This is a magnificent breakthrough, and could potentially lead to significant improvements to video capture and photography in general. Since the problems are mostly on the interpretation side (figuring out what the output of the stream of pixel events reconstructs to), this is a tractable problem with current technology.
Hopefully it really is a usable advance, and won't end up a technical footnote like the Lytro [wikipedia.org].
Re:Breakthrough (Score:4, Interesting)
Light field data is still very cool, even though Lytro's focus on "mediocre cameras that let you refocus your pictures" was rather weak. It'd be particularly nice for 3D scene reconstruction (for example, for self-driving vehicles, drone navigation, etc). I'd hope that formats for event cameras are designed to allow optional vector data to be associated with light events.
Re: Breakthrough (Score:2)
I have a feeling this will stick around. First, think of security cameras: all you need for motion detection is a measure of the bandwidth used. Second, consider a drone flying along in a line, filming the ground looking for driving cars. If you can make the unchanging state equal to the moving ground, you can find the moving cars by bandwidth the same way. Third, what I just described, prediction of the next frame with deviation, is how H.264 and every video compression going forward works, so it's
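A toy version of the "motion detection from bandwidth" point above, with made-up thresholds:

def motion_detected(event_counts_per_interval, baseline=50, factor=4):
    """Toy 'bandwidth' motion detector for an event-based security camera:
    flag any interval whose event count jumps well above the quiet-scene
    baseline. The baseline and factor here are arbitrary."""
    return [count > baseline * factor for count in event_counts_per_interval]

# Quiet scene, then something moves through the frame in intervals 3-4.
print(motion_detected([42, 38, 51, 900, 760, 47]))
# [False, False, False, True, True, False]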
Re: (Score:2)
Exactly. What I've always envisioned is that each pixel fits a spline to its light curve; data writeout is only triggered when it can no longer match its light curve to a single spline (or when forced to do a writeout by other pixels for compression reasons). A pixel in one place in the scene may have writeouts triggered a thousand times a second because everything is in chaos, while other parts of the frame sit around for seconds at a time with no writeout.
And since you're writing out splines, you also ha
Re: (Score:2)
This reminds me of mp3 decoders that can output 24-bit values even if the original was recorded at 16 bits; the data was stored as sinusoid curves.
A related idea that comes to mind is storing video as a 3D Fourier transform. I had this idea in 2002 while working on image recognition software, and I thought it would make it easy to balance between temporal and spatial precision automatically depending on the content. Later, I learned that Ogg Tarkin was being developed at the same time using the same idea
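A toy sketch of the "video as a 3D Fourier transform" idea: transform a small clip, keep only the largest-magnitude coefficients, and invert. Real codecs (and Tarkin's wavelets) are far more sophisticated; the clip and keep-fraction below are invented.

import numpy as np

def fft3_compress(video, keep_fraction=0.02):
    """Transform a (T, H, W) clip with a 3D FFT, zero out all but the
    largest-magnitude coefficients, and reconstruct. Only meant to show
    how one basis trades temporal and spatial precision automatically."""
    spectrum = np.fft.fftn(video)
    magnitudes = np.abs(spectrum).ravel()
    keep = max(1, int(keep_fraction * magnitudes.size))
    cutoff = np.partition(magnitudes, -keep)[-keep]
    spectrum[np.abs(spectrum) < cutoff] = 0
    return np.real(np.fft.ifftn(spectrum))

# A small moving-blob clip: 2% of the coefficients already give a rough copy.
t, y, x = np.meshgrid(np.arange(16), np.arange(16), np.arange(16), indexing="ij")
clip = np.exp(-((y - 8) ** 2 + (x - t) ** 2) / 8.0)
approx = fft3_compress(clip)
print("mean abs error:", np.mean(np.abs(clip - approx)))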
Re: (Score:2)
Yeah, it's impressive how good vision-only systems are getting at 3d scene reconstruction. Nowadays, not only do you have simple pattern-matching parallax distance measurements working for you, but the systems can use the same sort of "logic" that we humans use to avoid getting tricked into mismatching patterns and getting distances wrong. For example, if the neural net sees a "truck", it knows roughly how big trucks should be at a given distance, so if its distance calculations say that it's looking at e
Continuous? (Score:2)
"a camera that sees the world in a continuous stream of information, the way humans do."
Umm, no. Unless it's using entirely analogue electronics, it'll have a microcontroller with firmware which ultimately ticks to a clock. It may be a very, very fast clock, but it's still discrete time, not continuous.
Re: (Score:3)
Even time and space are discrete when you get to small enough values.
You don't have to go that far. At some point the time step is small enough to capture the highest frequency present, and the value step is fine enough to capture everything above the noise floor.
Re: (Score:2)
Multiple asynchronous clocks are still clocks; it's still not continuous time.
No reference frames? (Score:2)
A stream consisting only of delta events will, at any material length, be difficult to work with (e.g., to access randomly) and immensely fragile (e.g., once an errant event is injected, or an event is lost or corrupted in transmission, the image stays corrupted for the foreseeable future). This is the same general reason sensible incremental backup plans include periodic full snapshots, MPEG streams contain periodic I-frames, etc.
Though I haven't had time to fully tear it apart, I didn't see this directly addressed in t
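A toy illustration of why the periodic snapshots mentioned above matter: replaying a per-pixel delta stream with and without a later keyframe after one delta goes missing. The stream format here is invented.

def reconstruct(stream, width=4):
    """Replay a toy per-pixel delta stream. Entries are either
    ("key", full_row) snapshots or ("delta", index, change) events.
    Without periodic "key" entries, one lost delta corrupts every later
    value for that pixel, which is the point about I-frames above."""
    state = [0] * width
    for entry in stream:
        if entry[0] == "key":
            state = list(entry[1])          # full snapshot: resynchronise
        else:
            _, idx, change = entry
            state[idx] += change            # incremental event
    return state

stream = [("key", [0, 0, 0, 0]), ("delta", 1, +5), ("delta", 1, +2),
          ("key", [0, 7, 0, 0]), ("delta", 2, +3)]
corrupted = [e for i, e in enumerate(stream) if i != 1]   # drop one delta
print(reconstruct(stream))      # [0, 7, 3, 0]
print(reconstruct(corrupted))   # [0, 7, 3, 0] -- the later keyframe healed it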
Re: (Score:2)
They're doing away with "frames", mmkay?
The data stream that's captured is indeed processed into frames; that's the breakthrough.
This is what the folks at ETH Zurich just improved upon, developing a reconstruction model that can interpret the footage to the tune of 5,000+ frames per second.
Exactly what Carmack was thinking (Score:4, Informative)
Regular cameras work by capturing discrete frames, recapturing the same scene 24 or more times per second and then stitching it together to create a video. Event cameras...basically recording each individual light "event" as it happens, without wasting time capturing all the stuff that remains the same frame by frame.
Wow. That makes a lot of sense. And it's the same exact thought process John Carmack had when he tried to program Super Mario Bros. 3 for the PC [youtube.com].
For those who don't know the story, the PC had really poor side-scroller games back in the '80s and early '90s, because graphics engines at the time rendered everything frame by frame. Because CPUs were slow and had to handle both the game logic AND the video rendering, they couldn't manage more than about 6 frames per second. Carmack came up with the novel idea of rendering not whole frames, but only the changes happening pixel by pixel. Since not every pixel required updating every frame, the CPU could render far more frames per second.
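A toy sketch of the redraw-only-what-changed idea (not Carmack's actual adaptive tile refresh, which also leaned on EGA hardware scrolling; the tiny "screen" below is made up):

def redraw_changed_cells(previous, current):
    """Compare two frames cell by cell and return just the cells that
    need repainting, instead of repainting the whole screen. This is
    only the delta-update half of the trick."""
    dirty = []
    for y, (old_row, new_row) in enumerate(zip(previous, current)):
        for x, (old, new) in enumerate(zip(old_row, new_row)):
            if old != new:
                dirty.append((x, y, new))   # repaint just this cell
    return dirty

# Mario moves one tile to the right: only two tiles need redrawing.
frame_a = [".....", ".M...", "....."]
frame_b = [".....", "..M..", "....."]
print(redraw_changed_cells(frame_a, frame_b))  # [(1, 1, '.'), (2, 1, 'M')]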
As proof of concept, Carmack programmed a version of Super Mario Bros. 3 to show Nintendo that they could create a port of the game on the PC and boost their sales. Nintendo had no interest in moving their flagship game to a non-Nintendo platform, and declined Carmack's offer. So he instead co-founded id Software, and the rest is history. (A more thorough video explaining the history can be found here. [youtube.com])
It's official: Humans can see beyond 60 FPS. (Score:1)
^
Or 30 FPS. Depending on how retro your opinion is. lol
Re: (Score:1)
> Or 30 FPS. Depending on how retro your opinion is. lol
Or 24 FPS if you're even older... LOL... er, that means I'm old 8-[
Anyway, there's more to it than just being able to see: I get sick after playing a little DOOM (the original DOS version, but IIRC the same occurs on Linux). Quake I is remarkably OK for me. I skipped some great games because of that (for the record, I've had motion sickness my entire life... it does not bother me much when I drive, though).
I can play Nexuiz but Xonotic makes me a li
Re: (Score:1)
Damn. This is the best AC comment I've ever read.
I have a friend who told me that she can't see the difference between 30 and 60 fps, even moving a mouse around...
I was like Homer slowly backing away lol.