A Single Pixel Camera 190

BuzzSkyline writes "Scientists at Rice University have developed a one-pixel camera. Instead of recording an image point by point, it records the brightness of the light reflected from an array of movable micromirrors. Each configuration of the mirrors encodes some information about the scene, which the pixel collects as a single number. The camera produces a picture by pseudorandomly switching the mirrors and measuring the result several thousand times. Unlike megapixel cameras that record millions of pieces of data and then compress the information to keep file sizes down, the single-pixel camera compresses the data first and records only the compact information. The experimental version is slow and the image quality is rough, but the technique may lead to single-pixel cameras that use detectors that can collect images outside the visible range, multi-pixel cameras that get by with much smaller imaging arrays, or possibly even megapixel cameras that provide gigapixel resolution. The researchers presented their work on October 11 at the Optical Society of America's Frontiers in Optics meeting in Rochester, NY."
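For readers who want the idea in concrete terms, here is a minimal numerical sketch of the measurement scheme the summary describes: each pseudorandom mirror pattern sums part of the scene onto a single detector, which reports one number per pattern. The array sizes, variable names, and the plain least-squares recovery below are illustrative assumptions, not the Rice group's actual algorithm; a real compressive camera takes far fewer measurements than pixels and recovers the image with a sparsity-aware solver.

```python
# Illustrative sketch only (not the Rice implementation): simulate a scene,
# apply pseudorandom 0/1 mirror patterns, and collect one number per pattern.
import numpy as np

rng = np.random.default_rng(0)

n = 16 * 16                                  # scene flattened to n "pixels"
scene = rng.random(n)                        # stand-in for the true image

m = n                                        # here m = n; real compressive imaging uses m << n
patterns = rng.integers(0, 2, size=(m, n))   # one 0/1 mirror configuration per exposure
measurements = patterns @ scene              # one brightness reading per configuration

# With as many measurements as pixels, ordinary least squares recovers the scene.
# A compressive camera would instead use a sparsity-based solver (e.g. L1 minimization)
# to recover the image from far fewer measurements.
recovered, *_ = np.linalg.lstsq(patterns.astype(float), measurements, rcond=None)
print("max reconstruction error:", np.abs(recovered - scene).max())
```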
This discussion has been archived. No new comments can be posted.

  • I don't get it... (Score:3, Interesting)

    by red.alkali ( 1000125 ) <red.alka@li> on Friday October 20, 2006 @01:49AM (#16513087) Homepage
    It'll make current cameras, with simpler technology (less micromirror arrays and whatnot) cheaper? How? This stuff sounds expensiver.
    • by Anonymous Coward on Friday October 20, 2006 @01:58AM (#16513135)
      Sure it's expensiverest at the moment. But with economisationalisation from upscalifying the process you could see it cheapifying quickly.
    • by SuperKendall ( 25149 ) on Friday October 20, 2006 @02:29AM (#16513293)
      Is it really cheaper to manufacture micromirror arrays than CCD or CMOS sensors?

      Also, what degree of photon loss do you have from the arrays? No mirror is perfect...
      • by andy_t_roo ( 912592 ) on Friday October 20, 2006 @02:42AM (#16513351)
        Within a certain wavelength range (down to where actual atomic structures break up the smoothness), a perfectly flat material with no resistance has perfect reflection (that's why the silver backing on a glass mirror is so reflective: it's very flat and conductive).
        • Re: (Score:2, Funny)

          by fuzz6y ( 240555 )
          a perfectly flat material with no resistance
          Hey, next time you're in Physics Experiment Land, grab me 2 of those and a spherical cow.
        • by tylernt ( 581794 )

          that's why the silver backing on a glass mirror is so reflective: it's very flat and conductive

          Actually, a glass mirror is a poor example. Ever look at a reflection within a reflection (and so on) in a glass mirror? It eventually goes dark, because the light passes through the imperfectly clear glass and then back through it again on each reflection.

          On the other hand, a reflector telescope with a thin (few molecules) layer of aluminum on *top* of the mirror has some crazy 99.9% reflectivity (sorry, too lazy to google th

      • by Anonymous Coward on Friday October 20, 2006 @03:56AM (#16513655)
        Is it really cheaper to manufacture micromirror arrays than CCD or CMOS sensors?

        Not likely. And it certainly doesn't sound mechanically robust to have moving parts replace a purely electronic chip. Cameras need to be rugged.

        Also, what degree of photon loss do you have from the arrays? No mirror is perfect...

        Imperfection in the reflectivity is probably secondary to diffraction, which will be a big problem for these small mirrors - and they would have to shrink even further for reasonable (multi-Mpixel) image resolutions. Diffraction is the biggest limiting factor for contrast in DMD projectors.

        There are other problems with this design. First off, it is a time-sequential acquisition. The reconstruction algorithm assumes that all measurements are taken from the exact same scene. God knows what garbage it produces if you have moving objects or camera shake.

        I guess their biggest motivation is to do the image sensing directly in compression space. Unfortunately, their compression space is vastly inferior to the compression space of, say, JPEG. You see, JPEG is very cleverly designed in that it doesn't actually zero out certain frequencies directly - it just quantizes higher frequencies more aggressively than lower ones, and that results in data that compresses better with a lossless compression algorithm (Huffman). By contrast, this compressive camera thing essentially zeroes out certain low-amplitude frequencies directly. Not a very good idea perceptually.
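A toy contrast of the two strategies described above (a hand-rolled 1-D DCT and invented quantization steps, not real JPEG): coarse quantization keeps a little of every frequency, while hard zeroing throws small coefficients away outright.

```python
# Toy contrast, not real JPEG: quantize high frequencies coarsely
# vs. zero out small coefficients outright.
import numpy as np

N = 8
k = np.arange(N)[:, None]                 # frequency index (rows)
n = np.arange(N)[None, :]                 # sample index (columns)
dct = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
dct[0, :] /= np.sqrt(2.0)                 # orthonormal DCT-II basis

signal = np.array([52, 55, 61, 66, 70, 61, 64, 73], dtype=float)
coeffs = dct @ signal

steps = np.array([4, 4, 8, 8, 16, 16, 32, 32], dtype=float)   # coarser at high frequency
quantized = np.round(coeffs / steps) * steps                   # quantization idea

zeroed = np.where(np.abs(coeffs) > 16, coeffs, 0.0)            # hard zero-out idea

for name, c in (("quantize", quantized), ("zero-out", zeroed)):
    err = np.abs(dct.T @ c - signal).max()                     # inverse transform = transpose
    print(name, "max reconstruction error:", round(float(err), 2))
```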
        • Re: (Score:3, Interesting)

          by Anonymous Coward
          I'm not sure I agree with you.
          The problem with CCDs is you need to clock the values off the capacitors. Either you use a mechanical shutter to stop smearing while you do this, or clock it into masked areas, which means you either need to accept a 50% loss of area, or have micro-lenses, etc.

          With the single pixel idea you shouldn't have too many problems if you can clock the system fast enough.
          It also may be possible to create an array of mirrors with better behavioural uniformity than an array of detectors.

          D
        • Re: (Score:3, Informative)

          No camera system is perfect... but I think you might be selling this one short a little too soon.

          The idea behind the average consumer camera is to gather photons from a large area in a reasonably short amount of time. Usually we do this with film or with a CCD or CMOS array. However, film is going out of vogue, and CCDs and CMOS arrays can have dead spots. From a scientific standpoint, arrays are problematic for this very reason... plus, who has time to calibrate several thousand detector elements per camer
  • 101 (Score:5, Funny)

    by Timesprout ( 579035 ) on Friday October 20, 2006 @01:53AM (#16513113)
    This is me with Natalie Portman at a Star Wars convention (I'm the second 1).
    • Re:101 (Score:5, Funny)

      by Anonymous Coward on Friday October 20, 2006 @02:16AM (#16513211)
      Sorry, but due to the lossy process it is impossible to tell if hot grits were present.
      Please take another photo and maybe the randomness of the process will enlighten us.
    • Re:101 (Score:5, Funny)

      by TempeTerra ( 83076 ) on Friday October 20, 2006 @08:35AM (#16514859)
      Nice try, doofus, but that's clearly photoshopped.
    • You get used to it. I-I don't even see the code. All I see is Blonde, Brunette, Redhead....
  • Applications (Score:3, Interesting)

    by zaydana ( 729943 ) on Friday October 20, 2006 @01:53AM (#16513117)
    This could have some awesome applications, especially on space missions. Imagine the next generation of Mars probes and the resolution of the pictures taken if a camera near the size of current ones could have thousands of times the resolution. And of course, you also need to think about spy satellites. But perhaps the coolest application would be on space telescopes...
    • Re:Applications (Score:5, Insightful)

      by DerekLyons ( 302214 ) <fairwater@@@gmail...com> on Friday October 20, 2006 @02:01AM (#16513153) Homepage
      This could have some awesome applications, especially on space missions. Imagine the next generation of Mars probes and the resolution of the pictures taken if a camera near the size of current ones could have thousands of times the resolution.

      This is unlikely for several reasons: 1) resolution is far more limited by optical aperture than by the CCD array, 2) the system reads its images over a longish span of time - not good when your target is passing rapidly beneath you, and 3) the system requires considerable postprocessing - this either means you have to slow down the rate at which you take pictures, or eat scarce communications bandwidth.
       
       
      And of course, you also need to think about spy satellites. But perhaps the coolest application would be on space telescopes...

      The same objections apply to both applications.
    • by goombah99 ( 560566 ) on Friday October 20, 2006 @02:46AM (#16513367)
      Check this out [osti.gov] - in 1999, scientists at Los Alamos National Laboratory did essentially the same thing. Except they went one better: they also added in phase detection by heterodyning the receiver.

      Instead of using micromirrors, the Los Alamos team used an LCD, which was a more mature technology at the time. And instead of using random modulation, they used a progression of Zernike polynomials and thus achieved much more control over the data compression.

    • Re:Applications (Score:5, Informative)

      by tkittel ( 619119 ) on Friday October 20, 2006 @02:47AM (#16513369)
      Actually, a less fancy version of this technique was already used on Mars Pathfinder, where several images were taken of the same target and then combined to obtain better resolution.

      "Superresolution image processing is a computational method for improving image resolution by a factor of n[1/2] by combining n independent images. This technique was used on Pathfinder to obtain better resolved images of Martian surface features."

      Taken from the abstract of this article [inist.fr].
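A toy illustration of the shift-and-add idea behind that n^(1/2) figure (not Pathfinder's actual pipeline; the scene and shifts below are made up): four low-resolution frames taken at known half-pixel offsets tile a grid twice as fine.

```python
# Toy shift-and-add: four half-pixel-shifted low-res frames -> 2x finer grid.
import numpy as np

hi = np.arange(64, dtype=float).reshape(8, 8)   # stand-in "scene" on the fine grid

# Four low-res frames, each sampled at a different half-pixel phase.
frames = {(dy, dx): hi[dy::2, dx::2] for dy in (0, 1) for dx in (0, 1)}

# Interleave the frames back onto the fine grid.
recon = np.empty_like(hi)
for (dy, dx), frame in frames.items():
    recon[dy::2, dx::2] = frame

print(np.array_equal(recon, hi))   # True: n = 4 frames give a sqrt(4) = 2x finer grid
```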
      • by ceoyoyo ( 59147 )
        Superresolution is finicky and has awful noise characteristics. It's only really useful if there's some fundamental limitation that you're trying to overcome. Its biggest use is in light microscopy, where it can let you resolve things that are a bit smaller than the wavelength of your light. Or on space probes, when you have extra time on your hands but no way to put a better lens on your rover.
    • Re:Applications (Score:5, Interesting)

      by eonlabs ( 921625 ) on Friday October 20, 2006 @02:49AM (#16513379) Journal
      It makes more sense for small applications, I would think. A 39MPix CCD is several inches in each dimension. A single pixel would easily fit under a fingernail without anyone noticing. Depending on the mirror arrangement, you could probably have a lens-less camera that is not much bigger than a few grains of sand.
    • Re: (Score:3, Informative)

      by Intron ( 870560 )
      Lots of the satellites like GOES, etc. use a single sensor and a spinning mirror. So the horizontal is scanned by the mirror, and the vertical is scanned by the satellite motion. That gives you raster data with a single "pixel" sensor and it is already serialized in the correct order for transmission to the ground.
  • by macadamia_harold ( 947445 ) on Friday October 20, 2006 @01:55AM (#16513127) Homepage
    Scientists at Rice University have developed a one pixel camera.

    The camera's one pixel, but when you print it out full size, you get a mega pixel.
  • photo album (Score:5, Funny)

    by chowdy ( 992689 ) on Friday October 20, 2006 @01:58AM (#16513137)
    . here's me at the grand canyon
    . oh man, here's where i got drunk off of my ass
    . here's me apologizing for this terrible joke
  • by tonigonenstein ( 912347 ) on Friday October 20, 2006 @01:59AM (#16513143)
    One pixel should be enough for anybody.
  • by Harmonious Botch ( 921977 ) * on Friday October 20, 2006 @01:59AM (#16513145) Homepage Journal
    I'm trying to take a picture one pixel at a time!
  • by flyingfsck ( 986395 ) on Friday October 20, 2006 @02:03AM (#16513161)
    Early space cameras were single pixel and scanned their surroundings by their rotation.

    Early fax machines worked the same way, but spun the paper around while the single photocell moved linearly left to right.

    Hmmfff - Guess I'm giving my age away...
    • by Dunbal ( 464142 )
      Early CT scanners worked essentially the same way as well, with one sensor that was spun around.

      IANAE (an engineer) but I don't know if moving parts in a camera that's going to jiggle around anyway is such a good idea. At certain resolutions would you end up with the sum of the human factor's jiggles - plus the movement of the innards - distorting the picture even worse than today's cameras?
    • by mrjb ( 547783 ) on Friday October 20, 2006 @03:55AM (#16513645)
      Early fax machines worked the same way, but spun the paper around while the single photocell moved linearly left to right.

      Hmmfff - Guess I'm giving my age away...

      You should, in fact, call the Guinness Book of Records, as you must be the oldest person in the world. Fax machines of some sort or another have existed since the mid-late 19th century. [wikipedia.org]
    • Mars Viking lander (Score:2, Interesting)

      by cellmaker ( 621214 )
      Check out the Mars Viking lander. It used a "nodding" mirror with a 12 pixel array for its camera. This link gives a very detailed discussion of the Viking camera: http://dragon.larc.nasa.gov/viscom/first_pictures.html [nasa.gov] A rather large slide show document gives a very high level overview of different imaging devices used in space probes: http://www.mps.mpg.de/solar-system-school/lectures/space_instrumentation/11.ppt#281,1,Slide1 [mps.mpg.de]
    • The first IR astronomy imagers worked like that as well. With a single pixel. In fact, just last year I was in a class where we made a radio map of the sun using a single pixel (dish) radio telescope.

      This sounds like just a different way to do the same thing people have been doing for 30+ years...

    • by Detritus ( 11846 )
      Wire photo machines, used to distribute photographs to newspapers, used a similar system. Sometimes you can see them in old movies, when the police send a suspect's fingerprints to the FBI.
    • Re: (Score:3, Informative)

      Early space cameras were single pixel and scanned their surroundings by their rotation.

      Low-orbit weather satellites [noaa.gov] work this way too. They have a rotating mirror [noaa.gov] that scans the image on to a single-pixel sensor, then the spacecraft's motion provides the Y dimension. These things take really cool pictures. I use a modified Radio Shack scanner and my computer (with its sound card) to receive them.

      I've toyed with mechanical scanning for a couple of applications: making a high speed camera, and turning a

  • by syousef ( 465911 ) on Friday October 20, 2006 @02:04AM (#16513167) Journal
    If you record only (lossy) compressed data, that will limit your image quality.
    If you record things "pseudo-randomly", it'll be harder to get a predictable result.
    If you record a billion pixels instead of a million, you'll need to store them.
    If you reduce the number of pixels, you reduce your redundancy.

    It's still an interesting idea and probably has some specialist applications that will be very practical. Not sure what they are, but if it can be made small enough I imagine a gigapixel camera on a space probe, or better yet a space telescope (which can have more time to collect data), might be one. But don't look for this in your Nikon or Canon camera in the next 10 years. Of course it could also end up useless. That doesn't mean the technology shouldn't be explored. You never know what's going to provide the next breakthrough in understanding or application.
    • by The Panther! ( 448321 ) <panther@austin.YEATSrr.com minus poet> on Friday October 20, 2006 @02:41AM (#16513345) Homepage
      I think you may be missing the point (har har).

      What they are recording is not solely a pixel, I would suspect, but the configuration of mirrors that achieved that point. So, there is a significant amount of information that they can extrapolate from just a random number seed and the final color. The plenoptic function that describes the transfer of light from the environment to the plane of the sensor is 4D. By capturing from many different non-parallel input rays onto a sensor, you can extrapolate a lot about the environment that isn't present in a purely parallel data set.

      What I suspect their goal is, ultimately, is getting an array of mirrors onto a consumer-grade camera, and having it take three or four shots in rapid succession, then merge the information gained from each so that the result is more like having a High Dynamic Range image (well beyond the capabilities of any consumer-grade sensor), and use a tone-mapping algorithm to bring it back into a typical 8-bit range per component. It's complicated, but not impossible. Similar things that are only a year or two old in the graphics community (flash + non-flash images being merged to give good color in low-light situations, multiple exposure images merged for HDR, etc.) should come out in a couple of years as automatic modes for color correction, probably even on low-end cameras.

      Of course, I still have a 6 year old point and shoot, so what do I know? :-)
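For what it's worth, the multi-exposure idea sketched in that comment can be prototyped in a few lines; the exposure times, weighting scheme and tone curve below are invented for illustration, not anything from the article or an actual camera pipeline.

```python
# Rough sketch of merging a bracketed burst into an HDR image and tone-mapping it.
# All numbers here (exposure times, weights, tone curve) are invented for illustration.
import numpy as np

def merge_exposures(shots, times):
    """Weighted average of several exposures into one linear radiance estimate."""
    num = np.zeros_like(shots[0])
    den = np.zeros_like(shots[0])
    for img, t in zip(shots, times):
        w = 1.0 - 2.0 * np.abs(img - 0.5)   # trust mid-range pixels, ignore clipped ones
        num += w * img / t                   # divide out the exposure time
        den += w
    return num / np.maximum(den, 1e-6)

def tone_map(hdr, gamma=2.2):
    """Simple global operator: x/(1+x) compression followed by gamma."""
    return np.clip((hdr / (1.0 + hdr)) ** (1.0 / gamma), 0.0, 1.0)

# Fake a three-shot bracket of one scene (linear radiance with range > 1).
rng = np.random.default_rng(1)
scene = rng.random((4, 4)) * 4.0
times = [0.25, 1.0, 4.0]
shots = [np.clip(scene * t, 0.0, 1.0) for t in times]

print(tone_map(merge_exposures(shots, times)).round(2))
```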
      • How much space is required to store the mirror configuration and other inputs that allow you to get anything out of the one pixel result?
      • by ceoyoyo ( 59147 )
        People have been doing high dynamic range pictures with regular cameras for a long time. You just take two or more shots at different exposures.
    • If you record things "pseudo-randomly", it'll be harder to get a predictable result

      Odd you'd say that, considering how much computer technology presently relies on the inherent predictability of pseudorandom algorithms.

      Ever called srand() or randseed() ?
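That repeatability is exactly what a single-pixel camera could lean on: store the seed, and every mirror pattern can be regenerated later. A tiny sketch, with Python's random module standing in for srand() and the pattern sizes made up:

```python
# Same seed -> identical "random" mirror patterns, so only the seed needs storing.
import random

def mirror_patterns(seed, n_patterns, n_mirrors):
    rng = random.Random(seed)
    return [[rng.randint(0, 1) for _ in range(n_mirrors)] for _ in range(n_patterns)]

capture = mirror_patterns(seed=42, n_patterns=3, n_mirrors=8)
replay = mirror_patterns(seed=42, n_patterns=3, n_mirrors=8)
print(capture == replay)   # True
```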

  • Other wavelengths (Score:5, Interesting)

    by vespazzari ( 141683 ) on Friday October 20, 2006 @02:11AM (#16513189)
    I have often thought that it would be really neat if you could get a visual image of radio waves, for example around 2.4 GHz, and be able to see exactly how your surroundings block/absorb/reflect those waves - in addition to seeing the sources of the waves. They mention that might be possible by putting a different sort of detector in there instead of a CCD? Anyone know - would that be possible? Do 2.4 GHz waves bounce off anything the way light does off mirrors, without getting scattered?
    • Re:Other wavelengths (Score:5, Interesting)

      by earthbound kid ( 859282 ) on Friday October 20, 2006 @04:42AM (#16513833) Homepage
      Radio waves are big and they go through just about everything. It would look like a bunch of stuff made out of glass with varying degrees of transparency. Metal things would be darker glass, but anything less than one wavelength in size would be fuzzy and impossible to focus on anyway. In the distance, you would see a bunch of different colored lights flashing wherever there's a radio tower or cellphone. (Each different station would be a different color.) At night, you could see flashes in the sky where distant ham radio signals bounce off the ionosphere. All your household electronics would glow faintly in the same 60 Hz color, and you could probably make out all your wiring just sitting in one room and looking around, if it weren't for the fact that it all blurs up due to the size of the wavelength.
      • Re: (Score:3, Interesting)

        Nice vivid description! I would like to render such a scene, but alas, I couldn't model myself out of a wet paper bag. Maybe someone else is up for it?
        • Re: (Score:3, Interesting)

          by ceoyoyo ( 59147 )
          Take some crayons or open up Photoshop and draw some big blobs in different colours. That's what your kitchen would look like.

          Radio waves have large wavelengths and so your resolution is very restricted. Taking pictures of anything that's not a long distance away will give you pretty much the result above.
      • Re: (Score:3, Interesting)

        Well, 2.4 GHz is about 0.125 meters (call it 300/frequency in MHz), so 1/8th of a meter or so. Things on a human scale would look pretty fuzzy and weird, but not completely unresolvable - you could definitely see pretty well where your wifi sources were.

        60 Hz wiring would be so fuzzy as to be useless... but what if you plugged in a little gizmo that put a nice high-frequency signal on the line? That could actually be useful, though it'll be a long time before something like that's practical or remotely cost-e

        • Radio waves are big and they go through just about everything.

        They don't go THROUGH anything at all (at least nothing conductive). It would be more accurate to say they go AROUND things, but that's not really correct either. It's really a matter of scattering and interference. And they DO interact with things -- your car's radio antenna is not particularly substantial and yet it picks up radio waves.

        In general, a wave will reflect from a conductive surface that is much larger than its wavelength, it wi

  • any astronomy (Score:3, Interesting)

    or low light applications? i wonder what this idea would be like extended to non-electromagnetic phenomena, like electron microscopes, or neutron detectors or nuclear colliders or gravity waves. well, you need mirrors... "micromirrors"... but there are analogs to mirrors in non-electromagnetic phenomena. sort of
  • slow shutter much? (Score:2, Interesting)

    by Wizzerd911 ( 1003980 )
    my 2 MP camera has a hard enough time taking a clear picture when I'm holding it as still as I can and it's got like a 1/60 second shutter or something ridiculously fast like that. If you record an image one pixel at a time, it can't possibly be faster. Even those seemingly magic DLP mirrors couldn't possibly be faster.
    • Re: (Score:3, Funny)

      by Dunbal ( 464142 )
      Even those seemingly magic DLP mirrors couldn't possibly be faster.

            Do not underestimate the power of our shiny disco ball.
  • In fact, the first "TV"s were composed of a spinning disk with holes in front of a photomultiplier tube (the disks scanned the different bits of the image onto the camera); reconstruction was later done mechanically too. Where is the novelty?
  • can't wait (Score:5, Funny)

    by zoefff ( 61970 ) on Friday October 20, 2006 @02:30AM (#16513295)
    can't wait for the first four pixel camera. Imagine the resolution of that one! ;-P
  • by Dirtside ( 91468 ) on Friday October 20, 2006 @02:52AM (#16513401) Journal
    Lock ten marketdroids in a room and give them a task to try and create a marketing campaign for something impossible and ridiculous. Like a one-pixel digital camera.

    I'm envisioning a sticker on the box that reads "THE ONLY MICRO-MEGAPIXEL CAMERA!"
  • So this is in effect doing the reverse of what a CRT monitor does, isn't it?
  • exotic sensors (Score:3, Insightful)

    by Lehk228 ( 705449 ) on Friday October 20, 2006 @03:07AM (#16513461) Journal
    this could be useful for imaging in frequencies or frequency ranges where production of a pixel array isn't possible or economically feasible
  • Coming Soon (Score:2, Funny)

    by craagz ( 965952 )
    One Byte Hard Drive
  • by Flying pig ( 925874 ) on Friday October 20, 2006 @03:30AM (#16513567)
    This is a lensless design and therefore does not have problems of focus. The different parts of the scene should all be in focus simultaneously. There is no sensible way of achieving this with a lensed design, since the better the light gathering power, the narrower the plane of focus.

    The technique in use for years for infra-red cameras involves the use of a single (Peltier-cooled) pixel and a scanner, but scanners have numerous problems one of which is that there is always vibration caused by the two frequency components of the line end switching of the horizontal and vertical scans. This technique, by using pseudo-random switching, should eliminate vibration.

    So the ultimate long term goal would appear to be the ability to produce 3-D images with focus throughout the entire scene, low light capability and an absence of blur due to vibration. IANAOR (I am not an optical researcher) but it seems a good line of investigation.

  • by catwh0re ( 540371 ) on Friday October 20, 2006 @04:06AM (#16513691)
    ...but it'd suck to have a dead pixel.
  • by mattr ( 78516 ) <mattr&telebody,com> on Friday October 20, 2006 @04:06AM (#16513695) Homepage Journal
    Pretty surprised at all the dumb comments on this story. The scientists involved are not demeaned by consumers being used to cheap megapixel cameras, nor by a secret lab having done something that sounds similar, nor by some patent existing. Slashdot really sucks!

    If you are interested you can find out a lot about the really fascinating and cutting edge science of computationally assisted optics, or whatever is the correct term. It is the same field as the people who have been experimenting with giant arrays of cheap cameras, capturing entire light fields that can be sliced in time and space and reprojected later on, etc. It is computers plus physics and a big dose of creativity, which is why it is related to SIGGRAPH too.

    Anyway, this is interesting and is based on different principles from current megapixel cameras, which is why they think it might improve current cameras too. Just like the way the spaghetti physicists were laughed at by Harvard's Ig Nobel awards, even though they finally solved something Feynman couldn't crack and discovered a new method for focusing energy.

    Just off-hand, the one-pixel camera and compressive imaging theory they have look very interesting:
    • A one-chip computer with transmitter, battery and 1 pixel camera could be worn on your cuffs or collar and capture/assemble your entire surroundings from the random angles through which it is jangled.
    • Could be used mounted on the tip of a wire that is oscillated, giving many views of an object for cheap 3D scanning
    • Camera could include one pixel per range of spectrum, recording a full electromagnetic spectrum
    • They are doing only some simple compression right now. If your current camera could do wavelet compression within the ccd you could certainly get much better pictures and reduce the storage needed.
    • If current cameras can do all the work needed in 1/500 of a second, that means they could be doing a lot more if only compression, transmission and storage are solved; that is what they are working on.
    • The one pixel camera uses random projections to achieve a certain density of information that seems to be constant throughout the light field they are capturing. This means if they store orientation and time accurately, their data can be sliced at constant quality in any direction, so it is homogeneous data, which is good. Imagine slicing diagonally through a Kraft cheese block or through swiss cheese.
    • Compressive imaging might help video camera manufacturers wrap their heads around recording at far higher frame rates, including side radio bands for orientation, or combining multiple image sources. Compression in the imaging chip means less data to handle elsewhere.
    • If you read some of the bibliography (the Architecture one) you will see use of Haar wavelets to reconstruct an image from a 3-dimensional (200,000 voxel) data structure which performs much better than a 2-d one due to the sparseness of data. This paper also talks about the use of bands for which CCD use is impossible.



    • by Ant P. ( 974313 )
      Camera could include one pixel per range of spectrum, recording a full electromagnetic spectrum
      ...did you just invent a tricorder?!
  • Spam (Score:3, Funny)

    by britneys 9th husband ( 741556 ) on Friday October 20, 2006 @04:20AM (#16513737) Homepage Journal
    The spammers have had these cameras for a long time. They're always emailing me the pictures they took with them.
  • ...is when this will cause the price on a Canon 20D to plummet.
  • Some advantages (Score:2, Insightful)

    by WebfishUK ( 249858 )
    I guess that having all your data acquired by a single acquisition element may yield some precision advantages. One of the problems with arrays of elements is that each element will have very slightly different purity levels, which can have a subtle effect on the signal acquired. Obviously not much of an issue for visible light photography, but in situations where signal levels are very low, for instance in gamma ray detection, this may yield benefits.
  • OK, the mirrors are micro-mirrors, but I still have concerns with the complexity of this thing. It seems to be counter to the trend of making operations execute in parallel, rather than serially as they are often originally developed. I can see that it may carve a specialised niche for itself, but it doesn't look like it could take over the "happy snaps" market.

    With all the moving parts, how much power does this array consume? What happens if one of the actuators sticks: do you get dead pixel effects?

    • Re: (Score:3, Informative)

      by plover ( 150551 ) *
      Micromirror arrays have been commercially available for ten years now, and had been in design for at least ten years prior to that. They're used in DLP projectors and projection TVs. You can go buy one at Best Buy if you'd like.

      The durability of a micromirror array is actually very high. It's counterintuitive, but not hard to understand. The reason is the mirrors are so tiny. They have very little mass which means they transfer very little stress to their mechanical structure, even under large G forc

  • by hcdejong ( 561314 ) <hobbes@@@xmsnet...nl> on Friday October 20, 2006 @04:51AM (#16513871)
    How can an image which is constructed pseudorandomly ever compare to an image that is compressed using algorithms designed to preserve 'important' information?
    It seems to me you need to assemble the image before you can decide what to throw away.
    • by snarkh ( 118018 )
      The point is that you don't. Basically, the idea is that a random projection will preserve most of the information in the image with high probability.
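A quick numerical way to see that claim (dimensions and point counts below are picked arbitrarily): project some high-dimensional signals through a random Gaussian matrix and check that pairwise distances barely change.

```python
# Random projection roughly preserves pairwise distances (Johnson-Lindenstrauss flavour).
import numpy as np

rng = np.random.default_rng(0)
d, k, n_points = 4096, 256, 20

points = rng.random((n_points, d))
proj = rng.normal(size=(d, k)) / np.sqrt(k)   # random Gaussian projection to k dims
low = points @ proj

ratios = [
    np.linalg.norm(low[i] - low[j]) / np.linalg.norm(points[i] - points[j])
    for i in range(n_points) for j in range(i + 1, n_points)
]
print("distance ratios: min %.2f, max %.2f" % (min(ratios), max(ratios)))   # both near 1.0
```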
    • by Peaker ( 72084 )
      As far as I know, JPEG, for example, simply transforms the image via matrix multiplication, and then stores the less important components of the result with less precision.

      JPEG 2000 uses a wavelet matrix, which could be simplified and explained as it would operate on a 2-pixel image. Instead of storing 2 pixel values, you can store the average of the 2 pixels, and the difference between the 2 pixels. That is equivalent. Now you can store the average with high precision, and the difference wi
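The comment is cut off, but its 2-pixel example is easy to work through (the numbers below are just an illustration): keep the average at full precision, store the difference coarsely, and the reconstruction stays close to the original.

```python
# Two-pixel average/difference example: coarsen only the difference.
p1, p2 = 200.0, 204.0

avg = (p1 + p2) / 2                  # stored at full precision
diff = (p1 - p2) / 2                 # stored coarsely below
diff_coarse = round(diff / 4) * 4    # quantize the difference to steps of 4

print(avg + diff_coarse, avg - diff_coarse)   # 202.0 202.0, close to 200 and 204
```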
  • These researchers [stanford.edu] are doing something similar, they are using a photo-resistor as a single pixel camera, and a video projector for illumination. Take a look at the video (63M) [stanford.edu], it is a mind blowing demo of the technology.
  • ...single pixel monitor!
  • here we go again (Score:3, Informative)

    by oohshiny ( 998054 ) on Friday October 20, 2006 @06:15AM (#16514235)
    This kind of thing has been used for a long time: Nipkow Disk [wikipedia.org], Drum Scanner [wikipedia.org]. The combination with micromirror arrays is new.

    However, there's a reason current systems "acquire first, ask questions later", as the article puts it: electronics is much better at "asking questions" than mechanical hardware.
  • This is ancient history, of course, but if you're interested there's a club for enthusiasts.

    "Mechanical scanning devices which can be used include the Nipkow disc (shown above), the drum, the mirror drum, the mirror screw, oscillating mirrors and combinations of these. The camera usually has a lens to form an image which is then scanned and the light passes through to a photocell which generates the electrical signal" - Narrow-bandwidth Television Association [wyenet.co.uk]
  • Part Number (Score:3, Insightful)

    by ajs318 ( 655362 ) <sd_resp2@@@earthshod...co...uk> on Friday October 20, 2006 @07:03AM (#16514369)
    There has been a single pixel camera available for a long time, under the part number ORP12.
  • This thread is useless without pix. /fark
  • Gee... an array of these has enough information to construct a 3-d image much like a hologram.
  • Also known as a drum scanner. Nothing fancy here, move along....

    Seriously, this is a well known technique. We used to use it to scan large areas of highly variable terrain - the only novelty is the addition of mirrors and the fact that it's 100x faster than in the past.

  • When I was in graduate school, I proposed making an imaging spectrometer based upon the then-new digital micro-mirror array, a stationary diffraction grating, and a CCD array. I would say that is a fairly similar problem to the idea of making a camera. Some issues as a spectrometer:

    1) In spectroscopy, we have the idea of a multiplexing advantage. This is the increase in signal to noise which occurs from measuring the same information multiple times via its inclusion in a convolution of signals which is later
    • I didn't make this clear: there is a definite multiplex disadvantage for measuring visible light, as noise in the source will be emphasized in contrast to a single measurement, and signal to noise will drop markedly.
  • by swschrad ( 312009 ) on Friday October 20, 2006 @09:50AM (#16515513) Homepage Journal
    I refer, of course, to the flying-spot scanner of early (and sometimes late) television.

    it was very difficult to make a working early camera tube with lame phosphors, flaky passive components, and nightmare wiring. but it was pretty simple to paint a raster on a screen by comparison. so the object to be scanned was put in front of the raster and a single photodiode vacuum tube picked up the changes in brightness, and modulated the "spot" created by the line and position sweep signals.

    old hat by the end of the 1920s, but used as late as the 1980s in super-quality scanners to encode 35mm and 16mm film for network-quality television. the indian-head generators that took two racks of tubes, and provided the best signal reference at the start of a broadcast day and the best calibration signal for TV repairmen in the field, were all flying-spot scanners.

    no patent forrrrr YOU.
    • by ultramk ( 470198 )
      This is digital, that was analog. No cookie for you.

      Saying that this is the same thing is like saying that optical media (CD et al) are just another form of vinyl record. The principle's somewhat different, even if the method has similarities.

      m-
  • I'm pretty sure this is a 3-pixel camera. The image is in color.
  • I foresee some marketing problems with this technology.

    Customer: How many megapixels is it?
    Salesman: 0.000001!!

  • Do you remember how, back in the day, every CD player maker competed on "we have more bits than you" specifications? Well, that soon fizzled, because it didn't matter when the format the disks were encoded in was a fixed bit length. But Sony came out with the "1-bit Digital Analog Converter", which is analogous to serial I/O versus parallel I/O. A "simpler" but much faster 1-bit DAC could outcompete a more complex 16-, 24-, or 32-bit DAC because it was clocked much higher. It was cheaper, and basi
  • by SIGFPE ( 97527 ) on Friday October 20, 2006 @02:21PM (#16519185) Homepage
    here [mac.com]. It can grab an image using a single photocell. Note that the photocell (1) doesn't move and (2) collects light over a wide angle and yet I can still produce a picture. Yeah, yeah. It's not as good as your camera. But I don't have a multi-million dollar corporation funding me, just $100.
  • I've been thinking for a while that the best way to take a decent photo would be to take a little 1-5 second movie with a wide-spread 1 megapixel CCD, then use the slight movements that you will always get with a hand-held camera to fill in the area between the pixels.

    It occurs to me that if you stuck a camera out the window of a moving car driving down the street, you would have enough information to make an awesome 3-D panorama of that street (3-D because the moving view gives you the same effect as multiple came
  • IIRC, Steve Ciarcia did this way back in the '80s (or late '70s) with a photocell, parabolic mirror, and servo mechanisms. 16x16x8 bits intensity, IIRC
