Researchers Create Focus-Free Camera With New Flat Lens (phys.org) 35
Using a single lens that is about one-thousandth of an inch thick, researchers have created a camera that does not require focusing. Phys.Org reports: "Our flat lenses can drastically reduce the weight, complexity and cost of cameras and other imaging systems, while increasing their functionality," said research team leader Rajesh Menon from the University of Utah. "Such optics could enable thinner smartphone cameras, improved and smaller cameras for biomedical imaging such as endoscopy, and more compact cameras for automobiles." In Optica, The Optical Society's (OSA) journal for high impact research, Menon and colleagues describe their new flat lens and show that it can maintain focus for objects that are about 6 meters apart from each other. Flat lenses use nanostructures patterned on a flat surface rather than bulky glass or plastic to achieve the important optical properties that control the way light travels.
"This new lens could have many interesting applications outside photography such as creating highly efficient illumination for LIDAR that is critical for many autonomous systems, including self-driving cars," said Menon. The researchers say the design approach they used could be expanded to create optical components with any number of properties such as extreme bandwidth, easier manufacturability or lower cost.
"This new lens could have many interesting applications outside photography such as creating highly efficient illumination for LIDAR that is critical for many autonomous systems, including self-driving cars," said Menon. The researchers say the design approach they used could be expanded to create optical components with any number of properties such as extreme bandwidth, easier manufacturability or lower cost.
6 meters? (Score:2)
I can get even more depth of field than that with an ordinary camera stopped down to a small aperture. And 6 meters is a very different thing from "not needing focus" anyway. Marketing wanks must be advising them.
Re:6 meters? (Score:5, Interesting)
Context is everything - a 6m depth of field at 10m is a very different claim than a 6m depth of field at 0.1m. In this case they offered no further information to the 6m claim, but it does sound like they maintained focus from 5mm to 1200mm, which is a pretty crazy range.
A small aperture (high f-number) does indeed increase depth of field, but it also reduces the light admitted, which means you need a much longer exposure - f/16 lets in only 1/64th as much light as f/2. That can be a real problem if you're trying to photograph moving subjects or shoot in low light.
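The 1/64th figure follows from light scaling with aperture area, i.e. with the inverse square of the f-number. A quick sketch (the function name is my own):

```python
def relative_light(f_number: float, reference: float = 2.0) -> float:
    """Light gathered relative to a reference f-number.

    Light admitted scales with aperture area, i.e. with
    (focal_length / f_number)**2, so the ratio is (reference / f_number)**2.
    """
    return (reference / f_number) ** 2

# f/16 vs f/2: (2/16)**2 = 1/64, matching the figure above.
print(relative_light(16))  # 0.015625
```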
Plus, I would imagine there's a good chance you could *also* stop the aperture down on this new lens, so they're manipulating an independent variable - a lens with a much larger native depth of field, plus a small aperture, would have an enormous depth of field.
Re: (Score:3)
"they offered no further information to the 6m claim" Actually, the original paper does; it says, "Finally, in order to demonstrate the imaging potential of our MDL, we imaged a scene containing objects spanning a large DOF from 200 to 5943 mm (see Fig. 5). A conventional lens will not be able to keep all the objects in focus over such a large DOF. However, the MDL is able to take a single image where all the objects are in focus." I can't reproduce figure 5 here, of course, but both the original paper an
Re:6 meters? (Score:5, Insightful)
...except that you haven't included nearly enough information to be useful. For example, interested parties may want to know these points:
"Specifically, when illuminated by collimated light at a wavelength of 0.85 micrometers, the MDL produced a beam, which remained in focus from 5 to 1200 mm. "
"By only constraining the intensity to be focused in a large focal range and allowing the phase within this focal range to vary, we can solve a nonlinear inverse problem via optimization..."
"For a conventional lens with fixed focal length ( f ), the object distance (u) and image distance (v) are related by the formula 1/u + 1/v = 1/ f .
If either of u or v changes, then the other has to be adjusted to capture a focused image. For example, if the object moves closer to the lens (decreasing u), then the sensor must move away from the lens (increasing v). Since the focal length of our ExDOF lens is not a fixed value, we can image objects at different distance (changing u) without necessarily moving the sensor (fixed v)."
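The thin-lens bookkeeping in that quote is easy to check numerically (a sketch; the function name and the 50 mm example are mine):

```python
def image_distance(u_mm: float, f_mm: float) -> float:
    """Solve the thin-lens equation 1/u + 1/v = 1/f for v."""
    if u_mm == f_mm:
        raise ValueError("object at the focal point: image at infinity")
    return 1.0 / (1.0 / f_mm - 1.0 / u_mm)

# A 50 mm lens: as the object moves from 5 m to 0.5 m, the sensor would
# have to move from ~50.5 mm to ~55.6 mm behind the lens to stay in focus.
print(image_distance(5000, 50))  # ~50.505
print(image_distance(500, 50))   # ~55.556
```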
"By recognizing that the lens is primarily used for intensity imaging, we can treat the phase in the image (or focal) plane as a free parameter. Thereby, we can generate a phase-only pupil function that when imprinted on a beam results in a focus that can remain close to diffraction limited over a distance that is orders of magnitude larger than that of the conventional lens."
Yes, context is everything, and if your context is imaging using an infrared laser as your only light source then this might interest you.
Regarding the grandparent's pearl-clutching over f-stop, this is addressed in equation (3) in section 2 which states in part:
"This enhancement is several orders of magnitude larger than anything that has been demonstrated before..."
What is important is DOF w.r.t. the diffraction limit and this design represents a radical improvement. Smaller apertures improve DOF but lower the diffraction limit. In other words, pinhole cameras have great DOF but terrible resolution. This design offers vastly improved resolving power for the DOF it provides.
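The pinhole trade-off described above is quantifiable: the diffraction-limited spot (Airy disk) on the sensor grows linearly with f-number, so stopping down for depth of field costs resolving power. A rough sketch (function name and example numbers are mine):

```python
def airy_diameter_um(wavelength_um: float, f_number: float) -> float:
    """Diffraction-limited Airy-disk diameter on the sensor: 2.44 * lambda * N."""
    return 2.44 * wavelength_um * f_number

# Stopping down from f/2 to f/64 buys depth of field, but the smallest
# resolvable spot grows 32x - from ~2.7 um to ~86 um at 550 nm (green).
print(airy_diameter_um(0.55, 2))   # ~2.68
print(airy_diameter_um(0.55, 64))  # ~85.9
```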
Also, it is trivial to "create a camera that does not require focusing". The VERY FIRST cameras did not require focusing. Application and performance matter, so screw the authors of the article, and the editors at /., for failing to understand something so rudimentary.
TL;DR: This is a lens design for focusing lasers and similar monochrome light sources. Utterly uninteresting for conventional photography.
Re: (Score:2)
The individual sensors in digital cameras only work with monochrome light sources. We get colour by using 3 of them. We've kept the 3 very close together because aligning pixels across 3 separate sensors was impossible ... until recently. Google Pixels now do just that - not with 3 sensors separated in space, but with multiple pictures separated in time (with varying exposure levels, and accounting for camera shake).
Re: (Score:2)
I don't think adjusting the f-stop on one of these lenses would improve the depth of field. The analogy isn't perfect, but a metamaterial lens optimized for wide depth of field sort of acts like a bunch of very small lenses. You get the depth of field of a small aperture, but the light gathering capability of a large one.
You're right, though - the article citing a 6 m DOF is kind of silly, since depth of field doesn't have a linear relationship with distance. It's easy to get any lens to have a depth of field of 6 m or more if the subject is far enough away.
Car on a stick (Score:2)
The car on a stick is just about the greatest use of low tech in a high-tech presentation.
(Supplementary material: https://osapublishing.figshare... [figshare.com] )
So this'll be arriving... (Score:2)
... about the same time as all the amazing new battery technology we keep hearing about.
Also, given the description - I have to wonder how much of this is just taking advantage of hyperfocal distance. With the right choice of aperture and focal length, a conventional wide-angle lens can take photos with items acceptably in focus from a couple meters to infinity already.
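That "acceptably in focus from a couple meters to infinity" behaviour is the hyperfocal distance at work. A sketch of the standard formula (function name and example values are mine):

```python
def hyperfocal_mm(f_mm: float, f_number: float, coc_mm: float = 0.03) -> float:
    """Hyperfocal distance H = f**2 / (N * c) + f.

    Focused at H, everything from H/2 to infinity is acceptably sharp.
    c is the circle of confusion (0.03 mm is the full-frame convention).
    """
    return f_mm ** 2 / (f_number * coc_mm) + f_mm

# A 24 mm lens at f/8: H ~ 2.4 m, so roughly 1.2 m to infinity is in
# focus - the conventional "focus-free" setup described above.
print(hyperfocal_mm(24, 8) / 1000)  # ~2.42 (meters)
```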
Re: (Score:2)
Re: (Score:2)
At least we can look up his moniker and see his general posting information content. Unlike you, AC.
Re: (Score:2)
Don't waste your time making any kind of effort, we here at /. much prefer posts made in complete ignorance...so thank you.
A progression of science. (Score:4, Informative)
I think it's important that people realize that this is an incremental step in the research that has gone into the development of flat lenses. Phys.org alone has many articles on flat lenses [phys.org] but there are lots of people working on this and similar problems.
Science does not advance by leaps and bounds but by millimeters. Many failures have led up to a single success.
Re: Congratulations you reinvented the pinhole cam (Score:2)
But, but, the article has the word "nano" in it - it must be bleeding-edge tech the like of which the world has never seen before!
Re: (Score:3)
You should read the original article before jumping to conclusions.
Re: (Score:2)
Read Section 2 for an answer. You don't know because you haven't spent even a moment looking at what is published.
They didn't make the aperture small. (Score:5, Informative)
How is this new? Make the aperture small and voila, everything is in focus. This was how the first film cameras worked.
They didn't make the aperture small. They can collect all the energy hitting a large aperture and focus it all.
What they did was take advantage of the fact that image-plane optical sensors (film, photodiode arrays, ...) only measure the intensity of the light at each pixel and don't care about its phase. So by dropping the requirement that different pixels have the same relative phase, they could compute a phase plate (of which a Fresnel lens is one example) that uses this additional freedom to improve some other aspect of focusing. They chose to go for an extreme depth of field, and came up with an algorithm for choosing the random-looking distribution of phase shifts / material thicknesses that does the job.
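To make the "phase plate" idea concrete, here is a toy construction of my own (not the authors' inverse-design algorithm, and the parameter values are arbitrary): take an ideal lens's phase profile and quantize it into the discrete levels that a multilevel diffractive lens realizes as etch depths.

```python
import numpy as np

wavelength = 0.85e-6  # 850 nm, the wavelength used in the paper
focal = 1e-3          # 1 mm focal length (arbitrary for this sketch)
r = np.linspace(0, 0.5e-3, 1000)  # radial coordinate across the lens

# Ideal thin-lens phase, wrapped to [0, 2*pi) - the wrapping is what
# makes a flat (Fresnel-like) element possible at all.
phase = (-np.pi * r**2 / (wavelength * focal)) % (2 * np.pi)

# Quantize to 8 levels; each level would correspond to one etch depth.
levels = 8
quantized = np.round(phase / (2 * np.pi) * levels) % levels

print(sorted({int(v) for v in quantized}))  # [0, 1, 2, 3, 4, 5, 6, 7]
```

The authors go further than this sketch: instead of fixing the phase to an ideal lens profile, they let the image-plane phase float and optimize the plate for focus across a whole range of distances.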
Sure (Score:2)
I can get a depth of field from 4 meters to infinity with an f/22 aperture and a cheap non-flat objective.
The article doesn't address that at all.
Re: (Score:2)
A long way off from being available (Score:3)
When I RTFA, I see:
The researchers demonstrated the new lens using infrared light and relatively low numerical aperture
I then went to the actual paper: the testing was done at 850 nm (0.85 um), which is nowhere close to the full visible spectrum a practical camera would need. Which tells me that they've got a long way to go before it's usable with visible light as a practical camera lens.
When I first read the article, I wondered about the suggested application with LIDAR, and with this being demonstrated at a single IR wavelength, I can see its usefulness there.
Re: (Score:2)
When I first read the article, I wondered about the suggested application with LIDAR, and with this being demonstrated at a single IR wavelength, I can see its usefulness there.
If six meters is the limit of detection, at what speed could two oncoming cars headed for a collision still correct? What about a collision at a normal angle? And everything in between?
In other words, the greater the velocity and inertia, the less likely six meters is sufficient to avoid a collision. And that's the problem even before the practical constraints of other cars on the road using the same detection system, road width, the existence of a shoulder...
Re:A long way off from being available (Score:4, Insightful)
But you have missed the greater point.
Yes, the light source is "not anywhere close to a full light spectrum", it is in fact a single wavelength and is COHERENT. Full spectrum light sources cannot be coherent and this lens approach depends on coherency.
It's worse than "a long way to go", it is "completely does not apply to conventional photography".
Coherency not an issue. (Score:3)
Yes, the light source is "not anywhere close to a full light spectrum", it is in fact a single wavelength and is COHERENT. Full spectrum light sources cannot be coherent and this lens approach depends on coherency.
As I read it the coherency of the light is an artefact. It should work fine on an incoherently illuminated image - because each pixel has a particular phase and it explicitly doesn't depend on the relative phases of different pixels - which is exactly what it traded away to get depth of field.
Re: (Score:3)
The light is *collimated*, not coherent. Collimated means the light rays are parallel. Light from stars is very well collimated, and light from the sun is often close enough for many optics applications.
Canon already has DO lenses on the market (Score:3)
Compound eyes? (Score:2)
Am I looking at the eyes of a fly?
Field of View (Score:2)
No mention of the FOV. Maybe it's very narrow and that's why it would be good for medical apps?