
Canon Unveils 120-Megapixel Camera Sensor

Barence writes "Canon claims to have developed a digital camera sensor with a staggering 120-megapixel resolution. The APS-H sensor — which is the same type that is used in Canon's professional EOS-1D cameras — boasts a ridiculous resolution of 13,280 x 9,184 pixels. The CMOS sensor is so densely packed with pixels that it can capture full HD video on just one-sixtieth of the total surface area. However, don't hold your breath waiting for this baby to arrive in a camera. Canon unveiled a 50-megapixel sensor in 2007, but that's not made it any further than the labs to date." It's probably not going too far out on a limb to say that the any-day-now rumored announcement of an update to the 1D won't include this chip, but such insane resolution opens up a lot of amazing possibilities, from cropping to cheap telephoto to medium and large format substitution. Maybe I should stop fantasizing about owning a full-frame 1D or 5D and redirect my lust towards 120 megapixels.

  • by jvillain ( 546827 ) on Tuesday August 24, 2010 @02:10PM (#33358600)

    Canon makes some awesome lenses. You just can't buy them in the toy department at Best Buy. The problem with high-density sensors is that the denser they get, the higher the noise level becomes. I think that is one of the reasons Canon isn't tripping over itself to ramp up the megapixel count that fast.

  • by ZenShadow ( 101870 ) on Tuesday August 24, 2010 @02:32PM (#33358986) Homepage

    Been there, done that, believe it was patented by iPIX. Not sure who holds it now since they're gone AFAIK...

    Seriously, they used this to do those 3D virtual tours.

  • by Anonymous Coward on Tuesday August 24, 2010 @02:35PM (#33359026)

    The problem with modern digital cameras is that they are diffraction limited, http://en.wikipedia.org/wiki/Diffraction-limited_system [wikipedia.org], by the laws of physics, or very nearly so, given current-day lens technology. There is no way you will get higher actual resolution without going to lenses that are significantly larger in diameter than what we are used to in dSLRs. So adding more pixels in the sensor area of the latest Canon 1D models is completely pointless, which is why we haven't seen an update yet featuring higher resolution.

    In other words: Keep dreaming. These new detectors are just marketing gimmicks, or intended for specialist scientific applications, like astronomy.

  • by delta407 ( 518868 ) <slashdot@nosPAm.lerfjhax.com> on Tuesday August 24, 2010 @02:36PM (#33359042) Homepage

    A more substantial problem is that diffraction limits the effective resolution of an optical system to well above the size of each of these pixels. This is a problem with current sensors at narrow apertures; lenses exhibit a measurable loss of sharpness, typically at f/11 and up, because the Airy disks expand as the aperture contracts. With hugely dense sensors like this, though... plugging some numbers into a website that explains the whole situation [cambridgeincolour.com] suggests that you'd need to shoot with apertures wider than f/1.8 to get circles of confusion smaller than the size of a single pixel.

    That's right--even "fast" f/2.8 lenses are limited by physics to never being able to project detail onto individual pixels. You could potentially add a deconvolution stage in software to recover additional sharpness, but not in hardware.

    Another thing. Do the math: the pixels are 2.1 micrometers square. Compare that to trichromatic human vision, which detects red light peaking at 564 nanometers, or 0.564 micrometers. The size of a pixel is within a factor of four of the wavelength of the light it measures. Staggering.

    Glass isn't the problem. We need new laws of nature, since we're near the edges of the ones we have now.
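
    (If anyone wants to check those numbers, here is a rough back-of-the-envelope Python sketch using the usual first-minimum Airy disk approximation, diameter ~ 2.44 * lambda * N. The 550 nm wavelength and 2.1 um pitch are assumptions taken from this thread, not Canon specs, so the exact f-number threshold shifts a little with the wavelength you pick.)

        # Back-of-the-envelope diffraction check. Assumes mid-green light (550 nm)
        # and the ~2.1 um pixel pitch quoted above -- illustrative, not Canon data.
        WAVELENGTH_UM = 0.550    # wavelength in micrometers
        PIXEL_PITCH_UM = 2.1     # approximate pitch of a 13,280 x 9,184 APS-H sensor

        def airy_disk_diameter_um(f_number, wavelength_um=WAVELENGTH_UM):
            """First-minimum Airy disk diameter: ~2.44 * lambda * N."""
            return 2.44 * wavelength_um * f_number

        for f in (1.8, 2.8, 5.6, 11):
            d = airy_disk_diameter_um(f)
            print(f"f/{f}: Airy disk ~{d:.1f} um vs a {PIXEL_PITCH_UM} um pixel")

        # Largest f-number whose Airy disk still fits inside one pixel.
        max_f = PIXEL_PITCH_UM / (2.44 * WAVELENGTH_UM)
        print(f"Airy disk fits inside one pixel only at about f/{max_f:.1f} or wider")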

  • by RemyBR ( 1158435 ) on Tuesday August 24, 2010 @03:32PM (#33359976) Homepage
    Your father's lens is probably in need of calibration. I use one of those and it shows none of this even when wide open. Or there's a chance he got a bad copy, in which case calibration would still help, but not much.
  • by spun ( 1352 ) <loverevolutionary@@@yahoo...com> on Tuesday August 24, 2010 @03:53PM (#33360304) Journal

    Lots of people saying I'm right, too.

    Dynamic resolution and dynamic range are the same thing. If you take the value of one pixel, it will be three integers. If you average the value of several adjacent pixels, you will have three reals. There are more real numbers between 0 and 255 than there are integers between 0 and 255; therefore, the range of values has increased. (0,0,0) is still pure black, and (255,255,255) is still white; you can't get any blacker than black or whiter than white, you know. But using reals, you have more values between black and white than you did, and therefore, more dynamic range.

    Looked at another way, let's say a pixel is almost zero, or black. Using a single-pixel integer value, it would round down to black, but averaging more than one pixel, one might find it wasn't quite black anymore. We have something between zero and one, i.e. greater dynamic range.
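
    (A toy Python sketch of the kind of averaging being described here, with made-up 8-bit values; whether the extra precision counts as extra range is exactly what gets argued below.)

        # Toy 2x2 binning with made-up 8-bit grey values near the black point.
        block = [1, 0, 0, 2]               # four adjacent near-black pixels (0..255 scale)
        binned = sum(block) / len(block)   # 0.75 -- falls between the integer levels
        print(binned)                      # averaging trades resolution for sub-level precision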

  • by danpbrowning ( 149453 ) on Tuesday August 24, 2010 @04:22PM (#33360730)

    Actually, the "sensitivity" (more specifically, e-/m^2) is generally the same across a huge variety of pixel sizes, thanks to microlenses. What is not usually constant is read noise (AKA "high ISO noise", sometimes also referred to as "sensitivity"): although it does naturally shrink a little as the pixel size is reduced, it doesn't always shrink in exact linear proportion with pixel diameter, hence the generalization that smaller pixels tend to have slightly more noise in low light.

  • by bws111 ( 1216812 ) on Tuesday August 24, 2010 @04:26PM (#33360790)

    Resolution and range are not the same thing. Resolution is the number of increments within the range. Range defines how dark your darkest area can be compared to how bright the brightest area can be. Resolution is the number of shades of grey between black and white. If you have some areas of the picture that exceed the blackest black and others that exceed the whitest white, you don't have enough range, and averaging pixels cannot correct that.

  • by danpbrowning ( 149453 ) on Tuesday August 24, 2010 @04:30PM (#33360874)

    Dynamic range is the distance between clipping and noise. The standard engineering definition assumes a SNR of 1:1 as the lower bound, but few photographers can tolerate that much noise and usually prefer 4:1 or 8:1. Random noise sources add in quadrature, so downsampling the pixel count by a factor of four increases the SNR by a factor of 2. A better way of thinking about it is this: the raw data from an image sensor has a noise power that increases linearly with spatial frequency (level of detail). Higher frequencies (smaller details) have higher noise power. If you throw away the high-frequency detail, the noise goes with it. In actual practice, there are many better ways to reduce noise than by throwing away detail, and in any case, many viewers will prefer a detailed but noisy image over a blurry but less noisy one.
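
    (A small numpy sketch of the quadrature point, with invented numbers: averaging four pixels carrying independent noise roughly doubles the SNR, since the noise drops by sqrt(4).)

        import numpy as np

        # Invented example: a flat grey patch with independent Gaussian read noise.
        rng = np.random.default_rng(0)
        signal, noise_sigma = 100.0, 10.0
        pixels = signal + rng.normal(0.0, noise_sigma, size=(100_000, 4))

        snr_single = signal / pixels[:, 0].std()          # ~10
        snr_binned = signal / pixels.mean(axis=1).std()   # ~20: noise adds in quadrature,
                                                          # so 4x fewer pixels -> 2x SNR
        print(round(snr_single, 1), round(snr_binned, 1))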

  • by Peeteriz ( 821290 ) on Tuesday August 24, 2010 @04:43PM (#33361138)

    In principle, you get the same result as (or worse than) a cheaper, lower-resolution sensor where each pixel is simply 4 times larger and gets 4 times the light for the dark areas -- the bright parts will be maxed out anyway. And HDR usually means a much larger exposure difference than a mere factor of 4: a 10-stop difference, say, is 2^10 ~= 1000 times more light for the dark parts.

  • by danpbrowning ( 149453 ) on Tuesday August 24, 2010 @04:51PM (#33361292)

    That's right--even "fast" f/2.8 lenses are limited by physics to never being able to project detail onto individual pixels.

    That is incorrect. Parity between the Airy disk and pixel diameter is not the point at which additional detail becomes impossible -- that is only one point on the curve of diminishing returns. In other words, it is the difference between the "diffraction limited" spatial frequency and the "diffraction cutoff" spatial frequency. It is only the latter that denotes the impossibility of further resolution from decreased pixel size.

    The easiest way to understand this is to look at MTF. When diffraction causes the optical system MTF to drop to 50%, most would consider that the end of the line. But in fact, that is just the point where a lot of contrast is lost -- detail is still there and contrast can be restored with sharpening (e.g. Richardson-Lucy deconvolution). MTF must drop to 10% before detail truly becomes extinct, and for a 2.2 micron pixel like the ones in this 120MP Canon, f/5.6 will still give you 18% MTF, and there are a host of lenses that are very sharp at f/5.6.

    For further consideration, look at the effect of the anti-alias filter, which drops the MTF at spatial frequencies far lower than needed to suppress aliasing. The ideal solution to this problem is pixels that are so small that diffraction itself anti-aliases. That would increase contrast at lower spatial frequencies by 30%.
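
    (For anyone who wants to check the 18% figure: a Python sketch of the textbook diffraction MTF for an ideal circular aperture, evaluated at the Nyquist frequency of a 2.2 um pixel. The 550 nm wavelength is an assumption, so the exact percentage moves a little with the light you pick.)

        import math

        # Diffraction MTF of an ideal circular aperture:
        #   MTF(v) = (2/pi) * (acos(v/vc) - (v/vc) * sqrt(1 - (v/vc)^2)),  cutoff vc = 1/(lambda*N)
        def diffraction_mtf(freq_cyc_per_mm, f_number, wavelength_mm=550e-6):
            vc = 1.0 / (wavelength_mm * f_number)        # cutoff frequency, cycles/mm
            x = freq_cyc_per_mm / vc
            if x >= 1.0:
                return 0.0                               # past the diffraction cutoff
            return (2.0 / math.pi) * (math.acos(x) - x * math.sqrt(1.0 - x * x))

        pixel_pitch_mm = 2.2e-3                          # 2.2 micron pixels
        nyquist = 1.0 / (2.0 * pixel_pitch_mm)           # ~227 cycles/mm

        for f in (2.8, 4, 5.6, 8):
            print(f"f/{f}: MTF at Nyquist ~ {diffraction_mtf(nyquist, f):.0%}")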

  • by John Whitley ( 6067 ) on Tuesday August 24, 2010 @05:25PM (#33361888) Homepage

    I'd bet that you could use that many megapixels to seriously boost dynamic range by averaging several adjacent pixels into one.

    Simply put: no. Software "averaging" may smooth out noise, but it will not add information that was not present in the first place. Dynamic range that is missing at the hardware level is just not there to be recovered in software. In digital camera sensors, dynamic range is limited by saturation of the sensor's photosites [cambridgeincolour.com]. Once a photosite has collected enough photons, it registers maximum charge -- information about any further photons collected at that photosite during the exposure is lost. In fact, adding more photosites per unit area increases the per-photosite noise and the chip's areal overhead. Noise reduces dynamic range at the low end, and less charge capacity per photosite reduces dynamic range at the high end.
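
    (To put rough numbers on that: a common engineering definition is dynamic range in stops ~ log2(full-well capacity / read noise). The figures in the Python sketch below are invented for illustration, not measurements of any Canon sensor.)

        import math

        # Illustrative only: invented full-well and read-noise figures, not Canon data.
        def dynamic_range_stops(full_well_e, read_noise_e):
            """Engineering DR in stops: log2(saturation charge / noise floor)."""
            return math.log2(full_well_e / read_noise_e)

        big_pixel  = dynamic_range_stops(full_well_e=40_000, read_noise_e=5)  # ~13.0 stops
        tiny_pixel = dynamic_range_stops(full_well_e=8_000,  read_noise_e=3)  # ~11.4 stops
        print(f"large photosite: {big_pixel:.1f} stops, small photosite: {tiny_pixel:.1f} stops")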

    As another poster notes, you might change the effective exposure received by each photosite (perhaps by Bayer-array-like neutral-density filtering). Or you can do what Fuji did with the S3 Pro [danlj.org]: make a matrix of photosites of different sizes/sensitivities to improve dynamic range. Fuji's sensor, while nice, has hardly taken over the digital imaging world.

    On a more constructive note, Ctein wrote up a nice exposition on The Online Photographer about both near-term [typepad.com] sensor technologies entering production and long-term [typepad.com] avenues for advancement in digital imaging technology.

  • by spitzak ( 4019 ) on Tuesday August 24, 2010 @05:44PM (#33362180) Homepage

    Yes, as I said below, averaging a lot of pixels would lower the noise floor and increase the range. However, it increases the range by far, far less than if you used those N pixels for N differently exposed shots, and this sort of huge range increase is normally what is meant by "HDR".

  • by GlassHeart ( 579618 ) on Tuesday August 24, 2010 @05:59PM (#33362388) Journal

    It's not possible to get more range out of a single exposure, because the range is inherent in the capture based on how much light you choose to let in, and how sensitive your sensor is to that light. Dynamic range refers to the difference between the brightest and the darkest pixel the sensor can distinguish in that exposure. Beyond the bright end of the range, they all look the same white to the sensor. Beyond the dark end of the range, they also all look the same black.

    Here's how HDR works, oversimplified. We take a shot where we meter the bright part, so that it'll be properly exposed, deliberately sacrificing the dark parts. All dark pixels will be black in this exposure because we didn't let in enough light for the sensor to make out the difference. We then take another shot where we meter the dark part, sacrificing any somewhat bright parts. All bright pixels will be white because we let in too much light for the sensor to make out the difference. If we then combine the two images by throwing away the dark parts of the bright shot and the bright parts of the dark shot, we get a composited image that has more range than either image alone, i.e., HDR. Note that no averaging is involved.

    The alternate solution ceoyoyo is talking about requires a different kind of sensor. Imagine if you had two kinds of pixel sensors, one sensitive and the other insensitive. You'd alternate them on your sensor, perhaps in a checkerboard pattern, but basically pairing adjacent sensitive/insensitive pixels. Now, if your sensitive pixel registers too high a value, then it's probably blown out so use the value from the insensitive one (which is by definition not as bright). If the insensitive one registers too low a value, then it's probably too dark, so use the sensitive one (by definition not as dark). The crucial difference here is that you choose one over the other, and never average. If all you did was average, then the result is the same as using a single kind of pixel sensor with a sensitivity in the middle, and would not improve your dynamic range.
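
    (A minimal Python sketch of that selection logic, with invented values and an assumed 4x sensitivity ratio -- the point being that you pick whichever reading is trustworthy rather than averaging the two.)

        # Toy merge for one sensitive/insensitive pixel pair, as described above.
        # Values, clip level, and the 4x ratio are all invented for illustration.
        SENSITIVITY_RATIO = 4       # assumed: the sensitive pixel gathers 4x the signal
        CLIP = 255

        def merge_pair(sensitive, insensitive):
            if sensitive >= CLIP:                        # sensitive pixel blown out:
                return insensitive * SENSITIVITY_RATIO   # trust the insensitive one, rescaled
            return sensitive                             # otherwise keep the cleaner reading

        print(merge_pair(255, 90))   # highlight recovered from the insensitive pixel -> 360
        print(merge_pair(12, 3))     # shadow detail kept from the sensitive pixel -> 12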

  • To expand on this... (Score:2, Informative)

    by Estanislao Martínez ( 203477 ) on Tuesday August 24, 2010 @08:15PM (#33363862) Homepage

    I think this merits explanation at somewhat greater length.

    Nearly all digital cameras have Bayer array sensors, where each photosite only records the value for one of the three RGB color channels. A 12MP Bayer array camera produces full-color images with 12 million RGB pixels, but that overstates the amount of information the sensor captures by a factor of three; for each pixel in the resulting image, only one of the three channels' values was actually recorded directly from the scene, and the other two were interpolated from the values of adjacent pixels that recorded the missing channels.

    Or, to put it quickly: a 12-megapixel Bayer array camera is really 6 green megapixels, 3 red and 3 blue. This has several consequences:

    1. The sensor is susceptible to color moiré artifacts at its resolution limit. To avoid those artifacts, typically there is an optical anti-aliasing filter in front of the sensor that blurs the image a little bit, so that some of the light that would have fallen on only one photosite is spread to hit adjacent ones. This comes at a resolution cost.
    2. The effective resolution that you can get varies with the color of the subject. There's a good discussion of this effect at this page [ddisoftware.com]. But basically, if you're photographing a strong red or blue subject, your 12MP camera is closer to a 3MP camera.

    These two things mean that you can get resolution improvements from putting more photosites on a Bayer sensor, even if the size of the individual pixels is smaller than the circle of confusion of the lens.

    Imagine if the length of the side of the photosite coincided exactly with the diameter of circle of confusion. This means that a point on the subject that aligns perfectly with the center of a photosite is going to project entirely inside that photosite. Now assume that point of light is pure red. If the photosite is a red-sensitive one, the sensor then records the fact that the point has a strong red component, but it can't tell if it has a green or blue component. If the photosite is green-sensitive, then the sensor records the fact that the point has no green component, but it can't tell whether it has a red or blue component.

    Now, however, imagine that the photosite is smaller than the circle of confusion. Then some of the light is spilling over to adjacent photosites--which means that you record a value for all three color channels for that point on the subject. This makes it easier to infer the values of the missing channels at the pixel that corresponds to that photosite, because the adjacent photosites will have recorded it.

    So, making the pixels smaller beyond the lens's diffraction limit lets you (a) use a weaker anti-alias filter on the sensor (or none at all), and (b) get more consistent resolution for subjects of different colors. If you go all the way, you'd make your sensor have 4x the number of photosites as there are pixels in the output images: e.g., you'd build a camera with a 60MP Bayer-array sensor but output 15MP images, using 4 photosites per output pixel (and no anti-alias filter). That would outperform today's 15MP cameras.
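
    (A toy Python sketch of the 4-photosites-per-output-pixel idea on a single RGGB block. It ignores white balance, black level, and neighbouring blocks, so it's only meant to show that every channel of the output pixel gets a measured value.)

        # Bin one 2x2 RGGB Bayer block into a single RGB output pixel.
        # Raw values are invented; a real raw converter does far more than this.
        def rggb_quad_to_rgb(r, g1, g2, b):
            """R and B are read directly; the two greens are averaged --
            no channel of the output pixel has to be interpolated."""
            return (r, (g1 + g2) / 2, b)

        print(rggb_quad_to_rgb(180, 90, 94, 40))   # -> (180, 92.0, 40)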
