
Canon Unveils 120-Megapixel Camera Sensor

Barence writes "Canon claims to have developed a digital camera sensor with a staggering 120-megapixel resolution. The APS-H sensor — the same type used in Canon's professional EOS-1D cameras — boasts a ridiculous resolution of 13,280 x 9,184 pixels. The CMOS sensor is so densely packed with pixels that it can capture full HD video on just one-sixtieth of its total surface area. However, don't hold your breath waiting for this baby to arrive in a camera. Canon unveiled a 50-megapixel sensor in 2007, but to date it hasn't made it any further than the labs." It's probably not going too far out on a limb to say that the any-day-now rumored announcement of an update to the 1D won't include this chip, but such insane resolution opens up a lot of amazing possibilities, from cropping to cheap telephoto to medium and large format substitution. Maybe I should stop fantasizing about owning a full-frame 1D or 5D and redirect my lust towards 120 megapixels.
This discussion has been archived. No new comments can be posted.

  • by Greymist ( 638677 ) * on Tuesday August 24, 2010 @01:01PM (#33358440)
    I'm just curious what this would be like in low-light settings; cramming that many pixels into such a small space has got to have some effect on sensitivity.
    • Re: (Score:3, Interesting)

      by spun ( 1352 )

      I'd bet that you could use that many megapixels to seriously boost dynamic range by averaging several adjacent pixels into one.

      • Re: (Score:3, Insightful)

        by ceoyoyo ( 59147 )

        How would that help dynamic range?

        • by spun ( 1352 )

          You may find this article helpful: http://en.wikipedia.org/wiki/High_dynamic_range_imaging [wikipedia.org]

          • Re: (Score:3, Insightful)

            by ceoyoyo ( 59147 )

            I'm familiar with HDR, thanks. You'll note that the article you linked to doesn't contain the words "average" or "averaging."

            HDR requires that you have the same picture but with multiple, different exposures. You could potentially acquire this in one shot by making adjacent pixels more or less light sensitive (which has to be done in hardware), but averaging identical pixels isn't going to help. Nor does the HDR process involve averaging, even with multiple exposures.

            • Re: (Score:3, Insightful)

              by Beardydog ( 716221 )
              This is an actual question directed at you, not an argument, so bear with me... In a frame that captures the full available range in a scene (where a bright sky, for example, will have detail), the dark areas will be underexposed and noisy, but not completely black (the way an overexposed sky will appear completely white). Couldn't four adjacent pixels simply be added together to produce an image with four times the range, and one quarter the resolution? So if, for example, three of the underexposed pixels are…
              • by Peeteriz ( 821290 ) on Tuesday August 24, 2010 @03:43PM (#33361138)

                In principle, you get the exact same result as, or worse than, a cheaper sensor with less resolution where each pixel is simply 4 times larger and gets 4 times the light for the dark areas, and the bright parts will be maxed out anyway. And HDR usually means a much larger exposure difference than simply 4 times - say, a 10-stop difference is 2^10 ≈ 1000 times more light for the dark parts.

              • by GlassHeart ( 579618 ) on Tuesday August 24, 2010 @04:59PM (#33362388) Journal

                It's not possible to get more range out of a single exposure, because the range is inherent in the capture based on how much light you choose to let in, and how sensitive your sensor is to that light. Dynamic range refers to the difference between the brightest and the darkest pixel the sensor can distinguish in that exposure. Beyond the bright end of the range, they all look the same white to the sensor. Beyond the dark end of the range, they also all look the same black.

                Here's how HDR works, oversimplified. We take a shot where we meter the bright part, so that it'll be properly exposed, deliberately sacrificing the dark parts. All dark pixels will be black in this exposure because we didn't let in enough light for the sensor to make out the difference. We then take another shot where we meter the dark part, sacrificing any somewhat bright parts. All bright pixels will be white because we let in too much light for the sensor to make out the difference. If we then combine the two images by throwing away the dark parts of the bright shot and the bright parts of the dark shot, we get a composited image that has more range than either image alone, i.e., HDR. Note that no averaging is involved.

                The alternate solution ceoyoyo is talking about requires a different kind of sensor. Imagine if you had two kinds of pixel sensors, one sensitive and the other insensitive. You'd alternate them on your sensor, perhaps in a checkerboard pattern, but basically pairing adjacent sensitive/insensitive pixels. Now, if your sensitive pixel registers too high a value, then it's probably blown out so use the value from the insensitive one (which is by definition not as bright). If the insensitive one registers too low a value, then it's probably too dark, so use the sensitive one (by definition not as dark). The crucial difference here is that you choose one over the other, and never average. If all you did was average, then the result is the same as using a single kind of pixel sensor with a sensitivity in the middle, and would not improve your dynamic range.
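
                To see the choose-versus-average distinction in code, here is a minimal Python sketch of that checkerboard fusion (the function name, the 4x gain ratio and the clip threshold are all illustrative assumptions, not any real sensor's design):

                ```python
                import numpy as np

                # Minimal sketch of selection-based fusion of paired photosites.
                # GAIN is the assumed sensitivity ratio between the two pixel types;
                # CLIP_HIGH is an assumed 8-bit level above which the sensitive
                # photosite is likely blown out.
                GAIN = 4.0
                CLIP_HIGH = 250

                def fuse(sensitive: np.ndarray, insensitive: np.ndarray) -> np.ndarray:
                    """Per pixel, pick the photosite that is within its useful range."""
                    scaled = insensitive.astype(np.float64) * GAIN  # common units
                    out = sensitive.astype(np.float64)
                    blown = sensitive >= CLIP_HIGH  # sensitive photosite saturated...
                    out[blown] = scaled[blown]      # ...so choose (not average) the other
                    return out                      # wider range than either input alone
                ```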

        • Re: (Score:3, Informative)

          Dynamic range is the distance between clipping and noise. The standard engineering definition assumes an SNR of 1:1 as the lower bound, but few photographers can tolerate that much noise and usually prefer 4:1 or 8:1. Random noise sources add in quadrature, so downsampling the pixel count by a factor of four increases the SNR by a factor of 2. A better way of thinking about it is this: the raw data from an image sensor has a noise power that increases linearly with spatial frequency (level of detail). High…
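
          A quick numerical check of the quadrature claim, as a rough simulation with assumed Gaussian read noise (NumPy for the arithmetic):

          ```python
          import numpy as np

          rng = np.random.default_rng(0)

          # Averaging four pixels with independent noise halves the noise
          # (noise adds in quadrature, so it drops by sqrt(4) = 2) and thus
          # doubles the SNR. The signal level and sigma are illustrative.
          signal = 100.0
          pixels = signal + rng.normal(0.0, 10.0, size=(100_000, 4))
          binned = pixels.mean(axis=1)  # downsample pixel count by four

          print(round(pixels.std(), 1))  # ~10.0: per-pixel noise
          print(round(binned.std(), 1))  # ~5.0: noise halved, SNR doubled
          ```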

      • So. Um.

        You'd restore the dynamic range capability of this sensor to the level of lower-resolution (larger pixel) CCDs by... combining pixels? So you're back to lower-resolution imaging.

        Are we being "Whooshed!" here? Or are you sincerely saying "Well, we have a 120 megapixel imager, but in order to get good dynamic range we have to process it back to 10 megapixels, just like your crappy cell phone camera."

        • by mangu ( 126918 )

          Or are you sincerely saying "Well, we have a 120 megapixel imager, but in order to get good dynamic range we have to process it back to 10 megapixels, just like your crappy cell phone camera."

          He's saying we have a 120 megapixel imager with great dynamic range (much, much better than any cell phone camera) and we can process it back to 10 megapixels to get awesome dynamic range.

          Considering how big this chip is, even at 120 megapixels there's much more light gathering surface per pixel than in a shitty phone

          • by vadim_t ( 324782 )

            I don't think that would really work.

            Think of a sensor like a bucket. Let's say it has a capacity for 256 drops. If your scene is lit in such a way that some buckets would overflow while others had just a few drops in them, that's when you have a dynamic range problem. You solve that by either having buckets that can hold more drops (which is why a DSLR has much more range than a cell phone), or by taking multiple exposures (HDR).

            But I don't think you can post-process a high resolution image into a lower one…

            • by mangu ( 126918 )

              In strong light situations you either close your diaphragm, letting less light in, or use a shorter exposure. The problem is with low light situations.

              In your bucket analogy, there will be a situation when only one drop falls, and it will hit one of the four buckets. Averaging them you have 0.25 drops in each of the buckets, but looking at the four buckets separately you will have one bucket with a drop and three with none, i.e. a noisy picture.

              • by vadim_t ( 324782 )

                No, it's just as noisy.

                You have an average of 0.25 drops per four-bucket group, or 1 bucket with 1 drop and 3 with none, which averages to 0.25. Both images are really equivalent. You could scale down the second one, or scale up the first one, and get the same result.

                And none of those options will result in a good picture.

                First reason is that if you're shooting in the darkness, there's got to be a source of light somewhere. If you shoot with a slow enough exposure to get some details in the shadows, it's nearly certain…
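
                A rough simulation of the bucket argument (assumed Poisson photon arrivals; the counts are illustrative):

                ```python
                import numpy as np

                rng = np.random.default_rng(1)

                # Four small buckets averaging 0.25 drops each, summed, behave
                # exactly like one bucket with 4x the area catching 1.0 drop on
                # average (a sum of independent Poisson counts is itself Poisson).
                small = rng.poisson(0.25, size=(100_000, 4)).sum(axis=1)
                big = rng.poisson(1.0, size=100_000)

                print(small.mean(), small.var())  # ~1.0, ~1.0
                print(big.mean(), big.var())      # ~1.0, ~1.0: same noise either way
                ```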

        • Re: (Score:3, Interesting)

          by Jarik C-Bol ( 894741 )
          Not *quite*: you could still get, say, 40 megapixels. A very basic HDR picture is the combination of 3 ranges, so if you took your base picture on one pixel, and the bracket range on the pixels to its left and right (I'm generalizing here of course, the tech would not be THAT simple), the output is a 40MP picture with a dynamic range 3x what you would get with a standard 40MP camera. The fact that you're saying "get good dynamic range" shows that you don't know much about the subject. Normal cameras simply *don't*…
        • I think what he's saying is, "We have this incredibly sensitive sensor array; in good light it can do 13,280 x 9,184 pixels, or, if the light isn't that good, we can cut it to 6,640 x 4,592 and combine pixels to get a more accurate image despite the lack of light." It's like anything else: when conditions aren't ideal you lose stuff. In this case, if your light isn't good you can (in theory) go from phenomenally high resolution to merely really high resolution by combining pixels. You're only getting 2.5 times…

        • by spun ( 1352 )

          I'm saying, you could have your choice, more megapixels or more dynamic range, with the flip of a switch.

      • Re: (Score:3, Informative)

        by John Whitley ( 6067 )

        I'd bet that you could use that many megapixels to seriously boost dynamic range by averaging several adjacent pixels into one.

        Simply put: no. Software "averaging" may smooth out noise, but it will not add information that was not present in the first place. Dynamic range that is missing at the hardware level is just not there to be recovered in software. In digital camera sensors, dynamic range is limited by saturation of the sensor's photosites [cambridgeincolour.com]. Once a photosite has collected enough photons, it registers maximum charge -- information about any further photons collected at that photosite during the exposure is lost. In fact, adding more photons…

    • by wjh31 ( 1372867 )
      The pixel density is about that of a modern compact; a back-of-the-envelope calculation suggests about the same as 10Mpixels on a 1/2.5" sensor. So I guess low light performance would be comparable to modern compacts.
      • Not quite. It's the same PER PIXEL: if you crop a 10MP area out of this thing then it'll be roughly as noisy as a 10MP 1/1.8" sensor. (Did you use crop factor 1.6 or 1.3 in your math? This sensor is APS-H, 1.3x, not the commonly-used 1.6x APS-C.) But if you print at any given output size, the pixels from the higher-resolution sensor will be smaller and thus whatever noise is present at a pixel level will be less intrusive.

        For the math geeks: the real thing you should look at is the signal-to-noise ratio at…
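
        For anyone redoing the envelope math, a small Python sketch (the sensor dimensions are nominal approximations, not official specs):

        ```python
        # Per-pixel pitch for the 120 MP APS-H chip vs. typical compacts.
        APS_H = (28.7, 19.1)     # mm, approx; 1.3x crop
        INCH_2_5 = (5.76, 4.29)  # mm, approx "1/2.5-inch" compact sensor
        INCH_1_8 = (7.18, 5.32)  # mm, approx "1/1.8-inch" compact sensor

        def pitch_um(size_mm, megapixels):
            """Approximate pixel pitch in micrometres.

            The mm^2-to-um^2 factor (1e6) cancels against the 1e6 pixels
            per megapixel, so the formula simplifies to sqrt(area / MP).
            """
            return (size_mm[0] * size_mm[1] / megapixels) ** 0.5

        print(pitch_um(APS_H, 120))    # ~2.1 um: the 120 MP APS-H sensor
        print(pitch_um(INCH_2_5, 10))  # ~1.6 um: 10 MP 1/2.5" compact
        print(pitch_um(INCH_1_8, 10))  # ~2.0 um: 10 MP 1/1.8" compact
        ```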

    • by bieber ( 998013 )
      Well, its likely application is in controlled commercial use, medical imaging and the like, I would imagine, so any near-term use will almost certainly be under controlled lighting conditions.

      That being said, give 'em five years and they'll probably have it in a 1D churning out noiseless photos at ISO 51200 or some such nonsense. I used to think my 20D's marginally-usable ISO 3200 was pretty darn impressive, and now we've got insanely high res cameras doing 12800 and still looking decent. The way sensor…
      • What's improving is, in part, processing technology. At some point you run into the brick wall of photon statistics: the fluctuations in an experiment counting N photons can be no less than sqrt(N) due to shot noise.
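
        A quick illustration of that floor (assumed Poisson photon counting; the values are illustrative):

        ```python
        import numpy as np

        rng = np.random.default_rng(2)

        # A photosite collecting an average of N photons fluctuates with
        # standard deviation sqrt(N), so the best possible SNR is sqrt(N),
        # no matter how good the readout electronics become.
        for n in (100, 10_000):
            counts = rng.poisson(n, size=100_000)
            print(n, round(counts.std()), round(counts.mean() / counts.std()))
            # N=100   -> std ~10,  SNR ~10
            # N=10000 -> std ~100, SNR ~100
        ```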

    • by john83 ( 923470 )
      It's an interesting question. Pixels return an intensity value proportional to the mean intensity over their surface, so I'd imagine you could average groups of 2x2 or 3x3 (etc.) pixels to trade resolution for sensitivity. Alternatively, you could up the gain on each pixel, which as Greymist points out would reduce your signal to noise ratio.
      • by crgrace ( 220738 )

        Alternatively, you could up the gain on each pixel, which as Greymist points out would reduce your signal to noise ratio.

        Actually, increasing the pixel gain *improves* the SNR. This is because the noise limitation of these sensors is virtually always the readout electronics. Therefore, adding as much gain as possible before the signal hits the readout chain will lower the overall noise of the system. This is analogous to using a low noise amplifier (LNA) in front of an RF receiver.

        There are, of course, limitations. Pixels generally have a voltage gain of less than one (that is, the gain from the photodiode to the pixel output…

    • by crgrace ( 220738 )

      I'm just curious what this would be like in low-light settings; cramming that many pixels into such a small space has got to have some effect on sensitivity.

      Pixel size, per se, has no impact on the light sensitivity of the pixel. That depends only on the read noise of the sensor and its associated electronics. A small pixel, however, does limit the depth of the potential well, so it would have more of an impact in bright settings. What I'm saying is it would reduce the dynamic range of the sensor, but not have any direct effect on its performance in low light.

      To get back the bright performance, pixels can be ganged together to make superpixels, but, of course…

    • You can crank up the sensitivity all the way and then run a low pass filter to get rid of all the noise ;)

    • Re: (Score:3, Informative)

      Actually, the "sensitivity" (more specifically, e-/m^2) is generally the same across a huge variety of pixel sizes, thanks to microlenses. What is not usually constant is read noise (AKA "high ISO noise", sometimes also referred to as "sensitivity"): although it does naturally shrink a little as pixel size is reduced, it doesn't always shrink in exact linear proportion with pixel diameter, hence the generalization that smaller pixels tend to have slightly more noise in low light.

  • by BWJones ( 18351 ) * on Tuesday August 24, 2010 @01:02PM (#33358462) Homepage Journal

    Canon had better come up with some sharper lenses with a sensor like this. I shoot [utah.edu] with APS-H sensors on the Canon 1D, and many of the lenses that Canon, Nikon and Sigma among others make are not nearly sharp enough to deal with many more pixels than are on, say... the Canon 1Ds. Zeiss makes some sharp glass, but with the pixel density Canon is talking about with this new sensor, I'd worry about noise in low light conditions like those on my last embed [utah.edu] on the USS Toledo (world's first embed in a strategic nuclear submarine). Any sort of low light, high ISO images will be truly challenging environments for such small pixel imaging sites.

    It might be a great technology demonstrator or even a specific use CMOS chip for longer exposures, but I doubt it will have any applications in consumer or professional cameras unless some additional technology (or physics) comes into play.

    Also, one would have to come up with some new strategies for moving all of that data around. As it is, on the latest Canon 1D Mk IV, they are pushing 16.1 MP around at about 10 fps. With this new sensor, just the readout would prevent this sensor from being used in any but the most specialized of applications.

    • Re: (Score:2, Informative)

      by jvillain ( 546827 )

      Cannon makes some awesome lenses. You just can't buy them in the toy department at Best Buy. The problem with high density sensors is that the denser they get, the higher the noise level becomes. I think that is one of the reasons that Cannon isn't tripping over themselves to ramp up the megapixel count that fast.

      • by localman57 ( 1340533 ) on Tuesday August 24, 2010 @01:17PM (#33358742)
        Plus, sooner or later the general public is going to realize that megapixels aren't everything. The output of a 6 megapixel Nikon D40 will amaze your non-photographer friends, while the 14-megapixel Samsung compact you just bought at Walmart will most definitely not.
        • by lwsimon ( 724555 )
          I agree. I use a D70 for my hobby photography, and I would feel comfortable with it even today in many professional settings. That camera is 7 years old now.
        • Yes, pair that up with a 'nifty 50mm' and you've just blown away nearly every point and shoot out there. If you don't mind manual focusing (the D40 doesn't have its own focus motor) then you can get the 50mm lens for $200. The AF one, IIRC, is right around $400 or less.

      • Re: (Score:3, Interesting)

        by BWJones ( 18351 ) *

        Canon does make some great glass and I shoot exclusively with Canon glass. However, Nikon, Zeiss and Leica among others also produce some pretty sweet lenses. Eventually, everybody is going to have to deal with issues related to the optics being able to actually resolve the imaging sites. At some point (and we are close), the glass will not be able to resolve anything more than the sensor can read out and you'd have wasted pixels. Kinda like the issue with Apple's Retina Display on the iPhone 4 that I…

        • Random question: My father shoots Canon, and has gotten sort of frustrated with the ADHD problem of the autofocus. Using two different lenses (70-200/2.8, 100-400) and two different bodies (350D, 500D), he's noticed that the AF is easily distracted by foreground clutter, and will also inexplicably refuse to confirm an AF lock (and thus shoot) in some situations you'd think are easy, like a bird on the end of a twig with a background distant enough to be a blur. Have you experienced anything like this? (This…

          • I've not experienced this but some of the bigger telephoto lenses have the option to ignore foreground objects. If you focus on something, it won't try to refocus on something much closer. If I was your dad, I'd turn off autofocus to get a good focus and then turn it back on.

          • he's noticed that the AF is easily distracted by foreground clutter, and will also inexplicably refuse to confirm an AF lock (and thus shoot) in some situations you'd think are easy, like a bird on the end of a twig with a background distant enough to be a blur.

            It's a mix of two issues: first is focus point size. The second issue is that AF algorithms generally select the closest subject to the AF sensor. It sounds like you need a body that has spot AF (basically, it's a feature that cuts an AF point down…

          • I call BS; he must be doing something else wrong: the camera is dirty or not configured right, the lenses aren't genuine, he's just too close to the subject, etc. With my 400D and an array of cheap (but genuine Canon) lenses, center point one-shot AF on a centered subject is super fast and accurate in all but the lowest of lighting. Hell, I almost always leave my camera in AI-focus (notoriously picky) and it rarely lets me down.

          • My first solution to random autofocus issues is to shoot everything twice. Shoot once, then move the camera just a fraction and shoot again. Bracketing can help with that. Storage is cheap, so shoot everything twice if you can.

            The second solution is that most Canon cameras allow you to change the number and positions of the sensor points. The fully automatic modes usually won't let you, but the semi-auto ones will. The standard layout is roughly 10 dots covering the middle of the screen. But you can probably…

        • Re: (Score:3, Interesting)

          by Amouth ( 879122 )

          I want this:

          http://graphics.stanford.edu/papers/lfcamera/ [stanford.edu] /. posted it here ~5 years ago.. still waiting

          http://slashdot.org/articles/05/11/21/2316216.shtml?tid=126&tid=152 [slashdot.org]

        • Instead of worrying about "wasted pixels", we should be worrying about "wasted glass". A simple prime lens has about the same resolution and cost today as it did 20 years ago, whereas image sensor resolution and cost have advanced by many orders of magnitude in the same amount of time. Therefore we should be more concerned about that part of the system which is expensive and difficult to improve; not the part that is getting cheaper and better.

      • You spell it Cannon and you're telling someone who shoots with a 1D and has likely used Zeiss lenses that they can't buy awesome Canon lenses in the toy department at Best Buy.

      • Canon does make some awesome lenses, but even some of their L-lenses look somewhat lacking when used on their high-resolution sensors.

        My father has a 70-200 f/2.8L. It actually shows pretty low contrast and a "hazy" look until you stop it down to f/4 or more, especially at the long end. The new 70-200 mk2 is much better.

      • Considering that GP is discussing the EOS xD line, Zeiss lenses, and so on, I really doubt he's shopping for lenses at Best Buy.

        However, despite what you're saying, there are some hidden gems in Canon's cheaper lens offerings. I bought an EF-S 18-55mm f/3.5-5.6 IS as a throwaway lens (one just to use for an event and then replace with better glass later); it may exhibit a bit of CA (easily corrected in post), but I was surprised to find that it is wonderfully sharp, and there are folks who claim it compares favorably…

    • I'm sure the folks over at SDSS [sdss.org] (Sloan Digital Sky Survey) would be happy to make use of a sensor like this.
    • Re: (Score:2, Informative)

      by Anonymous Coward

      The problem with modern digital cameras is that they are diffraction limited, http://en.wikipedia.org/wiki/Diffraction-limited_system [wikipedia.org], by the laws of physics, or very nearly so, given current day lens technology. There is no way you will get a higher actual resolution without going to lenses which are significantly larger in diameter than what we are used to in dSLRs. So adding more pixels in the area of the sensor of the latest Canon 1D models is completely pointless, which is why we haven't seen an update…

    • by delta407 ( 518868 ) <slashdot&lerfjhax,com> on Tuesday August 24, 2010 @01:36PM (#33359042) Homepage

      A more substantial problem is that diffraction limits the effective resolution of an optical system to well above the size of each of these pixels. This is a problem with current sensors at narrow apertures; lenses exhibit a measurable loss of sharpness, typically at f/11 and up, because the Airy disks expand as the aperture contracts. With hugely dense sensors like this, though... plugging some numbers into a website that explains the whole situation [cambridgeincolour.com] suggests that you'd need to shoot with apertures wider than f/1.8 to get circles of confusion smaller than the size of a single pixel.

      That's right--even "fast" f/2.8 lenses are limited by physics to never being able to project detail onto individual pixels. You could potentially add a deconvolution stage in software to recover additional sharpness, but not in hardware.

      Another thing. Do the math: the pixels are 2.1 micrometers square. Compare to trichromatic human vision, which detects red light peaking at 564 nanometers, 0.564 micrometers. The size of a pixel is within a factor of four of the wavelengths it measures. Staggering.

      Glass isn't the problem. We need new laws of nature, since we're near the edges of the ones we have now.
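
      Plugging the numbers in directly, under the usual Airy-disk approximation and an assumed mid-visible 550 nm wavelength:

      ```python
      # Airy first-minimum diameter: d = 2.44 * wavelength * f_number.
      WAVELENGTH_UM = 0.55  # assumed mid-visible green light
      PIXEL_PITCH_UM = 2.1

      def airy_diameter_um(f_number: float) -> float:
          return 2.44 * WAVELENGTH_UM * f_number

      for f in (1.8, 2.8, 11.0):
          print(f, round(airy_diameter_um(f), 1))
      # f/1.8 -> 2.4 um (already wider than one 2.1 um pixel)
      # f/2.8 -> 3.8 um
      # f/11  -> 14.8 um (the blur spot spans roughly 7 pixels)

      # The f-number at which the Airy disk just matches the pixel pitch:
      print(round(PIXEL_PITCH_UM / (2.44 * WAVELENGTH_UM), 1))  # ~1.6
      ```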

      • Re: (Score:3, Informative)

        That's right--even "fast" f/2.8 lenses are limited by physics to never being able to project detail onto individual pixels.

        That is incorrect. Parity between the Airy disk and pixel diameter is not the point at which additional detail becomes impossible -- that is only one point on the curve of diminishing returns. In other words, it is the difference between the "diffraction limited" spatial frequency and the "diffraction cutoff" spatial frequency. It is only the latter that denotes the impossibility of further resolution from decreased pixel size.

        The easiest way to understand this is to look at MTF. When diffraction causes the…

    • by Ichijo ( 607641 )

      I shoot with APS-H sensors on the Canon 1D and many of the lenses that Canon, Nikon and Sigma among others make are not nearly sharp enough to deal with many more pixels than are on say... the Canon 1Ds.

      If you always shoot wide open, I can see why you would say that. But if I stop down, even the kit lens on my 10MP Canon XTi outresolves the sensor.

    • It's true that some lenses are already into diminishing returns, particularly fast or wide lenses. But many other lenses are not even oversampled with 2 micron pixel sizes, including many macro and sharp primes. Take a look at this example of a $400 macro lens using 1.2 micron (simulated) pixel size:

      http://forums.dpreview.com/forums/read.asp?forum=1019&message=29826265 [dpreview.com]

  • Still Cool (Score:5, Interesting)

    by lymond01 ( 314120 ) on Tuesday August 24, 2010 @01:06PM (#33358564)

    45 MP photo to zoom into:

    Dubai [gigapan.org]

  • I have a feeling that GPS and software integration to create automatic 3D-model photos are going to be more important than the resolution.
  • 150 megapixel (Score:2, Interesting)

    by Anonymous Coward

    Good film under ideal conditions can handle 2500 line pairs per inch. The mathematical purist who was more obsessed with numbers than practical applications would want a sensor that can handle 10,000 dots per inch for copying film, and an image sensor of 5,000 dots per inch for shooting, with optics, electronics, and other hardware (and software!) to match.

    5,000 dpi on a standard 35mm 3:2 aspect ratio means 37.5 megapixels.

    For what it's worth, 10,000 dpi would be 4x that amount, or 150 megapixels.
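
    The same arithmetic spelled out (note this rounds the 36 x 24 mm frame to 1.5 x 1.0 inches; the exact frame size gives about 33.5 MP at 5,000 dpi):

    ```python
    # Megapixels implied by scanning a 35mm frame at a given dpi.
    FRAME_IN = (1.5, 1.0)  # the comment's rounding of 36 x 24 mm

    def megapixels(dpi: float) -> float:
        return FRAME_IN[0] * dpi * FRAME_IN[1] * dpi / 1e6

    print(megapixels(5_000))   # 37.5 MP for shooting
    print(megapixels(10_000))  # 150.0 MP for copying film (4x the above)
    ```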

  • Size doesn't matter (Score:3, Interesting)

    by $RANDOMLUSER ( 804576 ) on Tuesday August 24, 2010 @01:19PM (#33358762)
    I have to go with Ken Rockwell on this one: Megapixels don't matter [kenrockwell.com]. Unless you're blowing your 35mm shots up to poster size, anything over about 8 megapixels is useless overkill.
    • I'd love to be able to take many of my old family and vacation photos and take a small piece and blow it up to 4x6 or even 8x12 size without noticeable-to-the-casual-observer loss of detail.

      Imagine taking crowd-scene photos of a sporting event then when your friend said he was there and points his face out in the crowd, you can print him out an 8x12 of him and his friends.

      • Imagine taking crowd-scene photos of a sporting event then when your friend said he was there and points his face out in the crowd, you can print him out an 8x12 of him and his friends.

        Even if your sensor could (theoretically) do that, your (hand holdable) lens couldn't.

        • by Ichijo ( 607641 )

          Even if your sensor could (theoretically) do that, your (hand holdable) lens couldn't.

          With or without optical image stabilization?

      • The loss of detail in those photos is because of optical defects, camera shake, and focus errors, not a limitation of the recording medium.

    • by Bryansix ( 761547 ) on Tuesday August 24, 2010 @01:54PM (#33359328) Homepage
      That article is OLD, and he is not saying that megapixels don't matter. He is saying that to see a difference you need to quadruple the megapixels, and also that other things matter a lot, like light sensitivity, pixel to space ratio, ISO performance and the like. He then goes on to say you would need a 25 megapixel camera to meet 35mm quality and that such a camera is not feasible. Well, I have to give him a Bill Gates, because it is moronic to say anything is not technically feasible; in 10 years you look like a fool.

      To get to the POINT: I own a Canon 5D Mark II, which has a 21 megapixel sensor. I have shot plenty of 35mm film and I can tell you without a shadow of a doubt that this sensor blows 35mm film out of the fucking water! You can see the images I take here: http://shezphoto.zenfolio.com/ [zenfolio.com] and www.shezphoto.com. Those are not even full res (although you can buy some of them at full resolution). I have blown up the images to 24" x 36" and all the detail remains intact. I'm sure I could go larger, but I just haven't.
      • Re: (Score:3, Insightful)

        by thegarbz ( 1787294 )
        Unfortunately this sensor also blows many of the lenses Canon makes out of the water. Wake me up when glass gets perfect and the laws of nature w.r.t. diffraction are broken. Until then ... 120mpx *YAWN*.
    • It all depends on how you intend to display the pictures. For most consumer applications the megapixel battle is over. If the pictures are only going to be seen on a screen or printed to something small like 4x5s, any modern camera will suffice. With an 8MP camera I can get acceptable prints up to 8x10, with just the slightest pixelation visible under close scrutiny. I recently had to shoot a picture for a book cover that I wanted to wrap from both extremes front to back across 15.5". This results in a toler…

    • by danpbrowning ( 149453 ) on Tuesday August 24, 2010 @03:58PM (#33361410)

      Ken Rockwell is to photography what a goatse troll is to Slashdot. (In fact, if you read his alien abduction pages, you'll see some similarity with goatse).

      It's like saying "Computer specs don't matter. Unless you're folding proteins, a 486 is just as good as an i5." While it's true that sharpness and resolution are not the most important factors in a photograph, it's misleading, as their benefits do in fact contribute to most styles of photography, just as a faster computer can contribute to a better experience for most computing needs.

      For example, most people feel that for an 8x10, there is no benefit to pixel counts above 6 MP, but in fact it takes 24 MP before all the possible gains are realized, most importantly counteracting the loss in contrast from the anti-alias filter. (Many more MP would be required to hit full color resolution at Nyquist, but few natural images benefit from that, despite what the Foveon advocates claim.)

  • Hook this sensor up to a round lens and capture full 360-degree video all the time, and use software to un-distort the image so you have a fixed tiny camera that you can pan and zoom all the way around with.

    • Re: (Score:3, Informative)

      by ZenShadow ( 101870 )

      Been there, done that, believe it was patented by iPIX. Not sure who holds it now since they're gone AFAIK...

      Seriously, they used this to do those 3D virtual tours.

  • Uses (Score:5, Insightful)

    by MBGMorden ( 803437 ) on Tuesday August 24, 2010 @01:21PM (#33358798)

    I'm sure the professionals would love such a critter, but as a person who likes to just take personal stills, to me the megapixel war is over. At this point in time I have a hard time getting excited over anything more than 10-12MP. They print just fine to photo sizes that I'd be interested in, and the truth is that MOST of my photos I keep digitally anyways, where anything that has more pixels than my monitor is a waste (particularly with the ballooning size of these photos).

    I'm far more interested in seeing higher quality photos within our current megapixel options than seeing that particular number go up and up - after all, there's a HUGE difference between your typical DSLR at 10MP and a $100 point and shoot at 10MP. That metric doesn't define the quality of the image.

    • Yes--I am troubled that some industries (not just cameras, and consumers are just as guilty) are forgetting the old "quality over quantity" thing.
  • at that resolution the pixel sensors are closer together than the wavelengths of visible light, and each photon will be triggering multiple pixels, thus reducing the apparent resolution.

  • Hmm, with that resolution we could do the science fiction standard nonsense:

    "Select quadrant in top right corner. Enhance.
    Select the reflection on the subjects glasses. Zoom 50X and enhance.
    See the face of the murderer.."

    Remember Blade Runner?
    http://criticalcommons.org/Members/ironman28/clips/bladeRunner3DphotoH264.mov/view [criticalcommons.org]

    • by PPH ( 736903 )
      Why go back that far? Every CSI show has a bit where the IT person takes a security cam photo (in real life, a bank robber's face occupies 6 pixels) and blows it up to read a license plate number across the street.
      • by mauriceh ( 3721 )

        Blade Runner was both the oldest example I could remember AND the most far-fetched version.

    • by grumbel ( 592662 )

      Hmm, with that resolution we could do the science fiction standard nonsense:

      The fascinating part of that scene is that it is actually extremely close to reality. We already have tons of gigapixel images floating around on the net, and in terms of resolution they seem to be quite on par with the Bladerunner image (i.e. 10 gigapixel or so). The Bladerunner image goes a bit further in that it is not only 2D, but actually a bit 3D, but even that is possible with lightfield photography [stanford.edu]. Now today those gigapixel images are produced by cameras mounted on robots, but when you look at Sw… [sony.co.uk]

      • by grumbel ( 592662 )

        And an additional throw-in: does anybody know of gigapixel images that capture mundane stuff? Cars, people, etc., instead of those large scale panoramas (a recreation of the Bladerunner picture would be perfect, of course)? The closest gigapixel images I could find that are not city panoramas are www.gigamacro.com/ [gigamacro.com], i.e. extreme closeups of money, insects and other stuff.

  • The real application for ultra-high resolution is surveillance cams. Something interesting might happen somewhere in a wide field of view, and when it does, detail is useful.

    • by joh ( 27088 )

      The real application for ultra-high resolution is surveillance cams. Something interesting might happen somewhere in a wide field of view, and when it does, detail is useful.

      You may have a point here. With something like 120 MP and a wide lens you can cover a large area with one camera and have access to details everywhere without having to move the thing around. On the other hand you get absolutely *huge* amounts of data...
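
      Roughly how huge, assuming an illustrative 12-bit raw readout at 30 fps (neither figure is from Canon):

      ```python
      # Uncompressed data rate for full-sensor video from this chip.
      WIDTH, HEIGHT = 13_280, 9_184
      BITS_PER_PIXEL = 12  # assumed raw Bayer depth
      FPS = 30             # assumed video rate

      bytes_per_frame = WIDTH * HEIGHT * BITS_PER_PIXEL // 8
      print(bytes_per_frame / 1e6)        # ~183 MB per raw frame
      print(bytes_per_frame * FPS / 1e9)  # ~5.5 GB/s of raw data
      ```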

  • It matters for me... (Score:3, Interesting)

    by PhantomHarlock ( 189617 ) on Tuesday August 24, 2010 @01:47PM (#33359216)

    From a professional photographer's standpoint, I DO appreciate more resolution, because I do make things that end up on posters and billboards. Also, the primary advantage in most cases is the ability to crop and still have a decent resolution image.

    As another poster mentioned, the main problem at this point is with the glass. Sharp glass that remains the same size, to accommodate a denser rather than larger sensor, is a tough proposition, and the new frontier of technology. Things like liquid lenses may overcome this in the future, who knows.

    Right now, with my 21MP 5D Mk. II, I can use modern Canon "L" zoom lenses to my heart's content and have an image that is sharp from corner to corner, especially now that you can easily correct for chromatic aberration in RAW processing software. (To give you an idea of how far this has come: when I was doing 3D animation 10 years ago, we would commonly add chromatic aberration back into 3D-generated images to give them a sense of realism.)

    For the sort of resolution discussed here, if you wanted relatively sharp pixels at 1:1 (spatial, or perceived resolution; actual sharpness delineation from one pixel to the next) you would probably want to stick with prime (non-zoom) lenses with fewer glass elements, and it would probably be OK.

    Other posters are correct in that this kind of resolution is currently unnecessary for consumer and casual use. But for me, large blow-ups and two-page spreads are a frequent thing, and I appreciate all the pixels I've got. :)

  • Did a bit of math here, and at 36-bit color a raw image would be a bit more than 535 MB.

    I don't think the technology is available yet to process an image that large into a jpeg or copy a raw image to a storage device quickly enough to use this in most camera applications - and definitely not in your point and shoot ;-)

    • Currently used sensors don't capture RGB for every pixel. Each pixel is one color, and then processing is done to interpolate the other colors from adjacent pixels of those colors. Go look up "Bayer filter" for more details.

      So really, rather than storing 36 bits per pixel, you'd only be dealing with 12 bits. Actually, most likely more than that: current Canon SLRs capture 14 bits per pixel. But the point is you don't need 36 bits. Let's just say 16 bits, which gives you about 240 MB for a 120 MP sensor.

      On top of…
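
      The parent's point in numbers, for a few typical (assumed, not confirmed) raw bit depths:

      ```python
      # One Bayer sample per pixel, so a raw file stores 12-16 bits per
      # pixel rather than 36 bits of full RGB.
      PIXELS = 13_280 * 9_184  # ~122 million

      for bits in (12, 14, 16):
          print(bits, round(PIXELS * bits / 8 / 1e6))
      # 12 -> ~183 MB, 14 -> ~213 MB, 16 -> ~244 MB
      ```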

  • The future is now (Score:3, Interesting)

    by freelunch ( 258011 ) on Tuesday August 24, 2010 @01:51PM (#33359278)

    boasts a ridiculous resolution of 13,280 x 9,184 pixels

    My 6x7 cm film images are already 11,023 x 9,448 when scanned at 4000 dpi.
    And there are no artifacts from Bayer interpolation.

    30x36" prints, and even larger, are spectacular. But you need good lenses, a good tripod, and good technique; otherwise you won't resolve the detail.

    And with 20x30" prints only $9 at Costco (on profiled printers), I *am* enlarging my prints to poster size, thankyouverymuch.

    I look forward to digital catching up.

  • Light Field Camera (Score:3, Insightful)

    by cowtamer ( 311087 ) on Tuesday August 24, 2010 @01:52PM (#33359308) Journal

    I'm sure it'll be perfect for this application:

    http://en.wikipedia.org/wiki/Plenoptic_camera [wikipedia.org] (a type of camera that lets you re-focus (and, to a certain extent, re-position) images after taking the shot; the problem is that it requires a LOT of resolution to produce acceptable images).

    http://graphics.stanford.edu/papers/lfcamera/ [stanford.edu]

    http://www.youtube.com/watch?v=9H7yx31yslM&NR=1 [youtube.com] (demo video from paper above)

    http://www.youtube.com/watch?v=o3cyntPC2NU [youtube.com]

    Here's one built with a 250 MP flatbed scanner:

    http://www.youtube.com/watch?v=4O5fPoacF3Q&feature=related [youtube.com]

  • About 21 megapixels on a full frame SLR is already pushing the resolution limit of reasonably priced lenses (i.e., L-series glass). You might get a bit more than that, say 30 megapixels. Beyond that you're exceeding the Dawes limit of the optics, and you're just not going to get any more detail this way than by just interpolating the digital 30 megapixel image.

  • I used to use an 8"x10" camera with 25 ASA film.
    As much as I really like digital, and I do, there is simply no way an 8x10" ('contact') print from a mere 120 megapixel file is going to be even close.
    I'll get stoked when we're talking 100+ gigapixels.

    • Why are you comparing large-format film to 35mm digital? You should be comparing like to like, and looking at large-format scanning backs.

  • Dynamic Range, (Score:5, Insightful)

    by 140Mandak262Jamuna ( 970587 ) on Tuesday August 24, 2010 @02:16PM (#33359724) Journal
    I wish they would spend more time on improving the dynamic range instead of just playing the megapixel count wars.

    Instead of increasing the total pixel count, get one set of pixels to shoot at the equivalent of 100 speed, the adjacent set of pixels to shoot at 200 speed, etc. Then process the pixels to get details in dark regions and to scale the brightness. I would like the dynamic range (brightness ratio of the brightest to dimmest pixel) to be a million or more, not the present 1000. The human eye has a dynamic range of about 1 million (only in the fovea, not in the peripheral vision).
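
    A minimal sketch of what that interleaved readout might look like in software (the row layout, gain and clip level are all hypothetical; assumes NumPy and an even number of rows):

    ```python
    import numpy as np

    def fuse_dual_iso(raw: np.ndarray, gain: float = 2.0, clip: int = 4095) -> np.ndarray:
        """Even rows shot at the low speed, odd rows at the high speed."""
        low = raw[0::2].astype(np.float64) * gain  # low-sensitivity rows, rescaled
        high = raw[1::2].astype(np.float64)        # high-sensitivity rows
        # Where the high-sensitivity rows clipped, fall back to the rescaled
        # low-sensitivity reading; elsewhere keep the cleaner high-speed value.
        return np.where(high >= clip, low, high)  # half the rows, wider range
    ```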

  • but serious camera users need at least 3.2 gigapixels [lsst.org] to fully exploit a decent lens [lsst.org].

    I admit, portability suffers a bit at this point, but aren't your pictures worth it?

    • by Shag ( 3737 )

      The only problem with that setup is that it takes pictures of what it wants to. ;)

      I'll have to stick with the next best thing [sinica.edu.tw], which I at least get to point at things. :)

  • I hope they package this behind a nice 3mm plastic fixed focus lens!
  • In other news, Ford has set a new land speed record by attaching a Mustang to a solid-fuel rocket from the space shuttle. Funeral services for the driver/pilot will be held next week.

    A sensor beyond 20 MP is of limited use - it out-resolves nearly all commercially available lenses. This is when professionals move up to medium format cameras and lenses to achieve a larger image area. Diffraction and noise are just a few of the problems that have not been resolved with small, dense sensors.

  • Think the other way (Score:2, Interesting)

    by RickyG ( 1009867 )
    If you have the technology to make a 120 megapixel camera, reverse your thinking. Can you use that technology to decrease the size of your current product, so that a standard 8 to 10 megapixel camera is so small and compact that it meets the needs of the growing phone/iPod/iPhone/iPad industry?
