Graphics Software Science

Algorithm Seamlessly Patches Holes In Images (198 comments)

Beetle B. writes in with research from Carnegie Mellon demonstrating a new way to replace arbitrarily shaped blank areas in an image with portions of images from a huge catalog in a totally seamless manner. From the abstract: "In this paper we present a new image completion algorithm powered by a huge database of photographs gathered from the Web. The algorithm patches up holes in images by finding similar image regions in the database that are not only seamless but also semantically valid. Our chief insight is that while the space of images is effectively infinite, the space of semantically differentiable scenes is actually not that large. For many image completion tasks we are able to find similar scenes which contain image fragments that will convincingly complete the image. Our algorithm is entirely data-driven, requiring no annotations or labelling by the user."
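The loop the abstract describes — match the incomplete image against a large collection, then composite a fragment from the best match — can be sketched in miniature. This is a rough stand-in, not the paper's method: the real system uses gist scene descriptors, graph-cut seam finding, and Poisson blending, whereas here the "descriptor" is just coarse cell averages and the fill is a direct paste.

```python
# Toy sketch of data-driven image completion, with grayscale images as
# 2D lists of numbers. The descriptor and compositing are simplifications
# of what the paper actually does (gist + graph cut + Poisson blending).

def descriptor(img, cells=2):
    """Coarse cell-average descriptor (a crude stand-in for gist)."""
    h, w = len(img), len(img[0])
    ch, cw = h // cells, w // cells
    desc = []
    for cy in range(cells):
        for cx in range(cells):
            block = [img[y][x]
                     for y in range(cy * ch, (cy + 1) * ch)
                     for x in range(cx * cw, (cx + 1) * cw)]
            desc.append(sum(block) / len(block))
    return desc

def complete(img, hole, database):
    """Fill `hole` (a set of (y, x) pixels) from the best-matching image."""
    def dist(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b))
    target = descriptor(img)
    best = min(database, key=lambda cand: dist(descriptor(cand), target))
    out = [row[:] for row in img]
    for y, x in hole:
        out[y][x] = best[y][x]  # direct paste; the paper cuts along a seam
    return out
```

The point of the huge database is that `min` over millions of scenes will usually find something whose pasted pixels are plausible in context.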
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Finally... (Score:5, Funny)

    by setirw ( 854029 ) on Thursday August 09, 2007 @09:34AM (#20168825) Homepage
    Uncensored Japanese pornography!
    • Re:Finally... (Score:4, Insightful)

      by mwvdlee ( 775178 ) on Thursday August 09, 2007 @09:43AM (#20168943) Homepage
      The algorithm requires images of similar content which can be used to fill the holes.
      Where are you going to find such images?
    • by mosel-saar-ruwer ( 732341 ) on Thursday August 09, 2007 @09:56AM (#20169119)

      Our chief insight is that while the space of images is effectively infinite, the space of semantically differentiable scenes is actually not that large. For many image completion tasks we are able to find similar scenes which contain image fragments that will convincingly complete the image. Our algorithm is entirely data-driven, requiring no annotations or labelling by the user.

      What are the "semantics" here?

      Is this like google images, where the nearby HTML text determines the classification of the image [i.e ASCII-text as meta-data for images]?

      Or is this a great big neural net of wavelet data which classifies the images mathematically?

      PS: I have the same question about that infamous Photosynth/Sea Dragon demonstration:

      http://www.ted.com/index.php/talks/view/id/129 [ted.com]

      • Re: (Score:3, Informative)

        by gurps_npc ( 621217 )
I think by "semantically different" they meant real/meaningful/non-pointless differences.

        Effectively that means that just as you personally can recognize a bunch of pictures as all being "japanese porn" or all being "pictures of boats", or all being "pictures of men in suits", so can the computer.

And the number of different categories that humans take pictures of is not that large, probably less than 200,000 different categorizable subjects.

        So with 1 million pictures, you have 5 of any category and can

        • by 12357bd ( 686909 )

Don't know for sure, I didn't RTFA, but:

1) Human categories number far more than 200,000, and even if you reduce the number to a manageable level, they have no graphical definition (suppose the 'face' category: how could you classify a cubist picture of a 'face'?), and

2) The fact that they claim not to use annotations probably means they are using clustering techniques to detect image groups with a high graphical intersimilarity factor; that is, the class is not a concept but a bunch of similar pictures, I

        • Re: (Score:3, Insightful)

          by Venik ( 915777 )
I think this type of classification would be pointless. Even if the computer could somehow differentiate between photos of boats and photos of men in suits floating face down in the bay, this would not help with seamlessly patching holes in images. But the truth is, while software can recognize particular elements of a photo, there is no understanding of the subject or its context. Thus a missing mountain range in the background can be replaced by an ocean or a parking lot. Whatever categories this meth
      • What are the "semantics" here? Is this like google images, where the nearby HTML text determines the classification of the image [i.e ASCII-text as meta-data for images]?

        No. The matches are done in a purely data-driven manner, meaning by analyzing lots of images and guessing matches. Meta-data appears not to have been used.

        Or is this a great big neural net of wavelet data which classifies the images mathematically?

        Probably a lot closer to the truth.

        Another paragraph gives a clue to how they're usi

      • Re: (Score:2, Informative)

        by dookiesan ( 600840 )
The paper is published on Efros's website; I don't think neural nets were used, but I only glanced at it a while ago. A program called 'gist' summarizes all of the images in the database, and based on similar summaries they narrow down to a couple hundred images. Then they pick one image and look for a cut line slightly surrounding the missing area which minimizes some criterion. You don't see the seam because they fill in more than just what was missing.
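The "cut line minimizing some criterion" step above can be illustrated in one dimension: given the existing image and the candidate patch over the same overlap region, switch from one to the other at the column where they differ least. This is a 1-D stand-in for the paper's actual 2-D graph-cut seam, with images assumed to be grayscale 2D lists:

```python
def best_seam_column(existing, patch):
    """Return the column where |existing - patch|, summed down the
    column, is smallest -- the cheapest place to switch images.
    A 1-D stand-in for the 2-D graph-cut seam in the paper."""
    h, w = len(existing), len(existing[0])
    cost = [sum(abs(existing[y][x] - patch[y][x]) for y in range(h))
            for x in range(w)]
    return min(range(w), key=cost.__getitem__)

def composite(existing, patch, seam):
    """Keep pixels left of the seam from `existing`, take the rest
    from `patch`."""
    return [row[:seam] + prow[seam:] for row, prow in zip(existing, patch)]
```

Because the switch happens where the two images already agree, no visible edge is introduced, which is why the result looks seamless even though the fill came from a different photograph.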
      • They mean from just a set of images (ie, no meta data or other 'context' as you wondered) they can identify similar images to the image to patch and then find good fits in those images. It's probably not that difficult to do, but the approach is the reason for that.

        You just set up some sort of heuristic mechanism to ascertain common elements in picture A and then scan through the set for matches ranking by number of hits.

        It's a lot like a radix trie in that it relies on partially similar nodes/pictures to w
    • Re:Finally... (Score:5, Interesting)

      by pla ( 258480 ) on Thursday August 09, 2007 @10:15AM (#20169381) Journal
      Finally... Uncensored Japanese pornography!

      More seriously, I can see this applied to "fixing" pictures of just about anyone you want to see naked.

      Fake celeb slips will of course come first, but why stop there? That cute girl at the coffee shop? Snap her with the camera phone, erase all those pesky clothes, and let this algorithm do its thing.

      Of course, I could also see this used for more nefarious (even "sick") purposes... Ex-GF cheated and you don't have any nude pics to release to the web? You do now. And if you "repaired" a fully-clothed original of someone underage, would it still count as child porn?

      And I don't even want to think about how the furries would use this... Ugh.
      • by voxel ( 70407 ) on Thursday August 09, 2007 @12:59PM (#20171709)
        I'd go more like this:

        You take a picture of that cute girl at the coffee shop. Snap her with the camera phone, erase all those pesky clothes, and let the algorithm do its thing.

        You wait for the algorithm to finish, it says "Done", you get all excited and click the button to see the result, and.... * DOH *, it put all her clothes back on, albeit a different color and style.
      • by Kjella ( 173770 )
        If you look at the algorithm, it pretty much sucks when there's unclean lines at the edges. It's perfect when you can cut out part of the image along a texture like sea, concrete or to replace entire areas like a wallside. At the very best you'd end up with an ugly half-decapitated hack which you'd have to photoshop. It could probably be used to do things like make a bikini picture into a nude picture by choosing the most appropriate replacement though, but that's roughly as far as you'd get, and even then
      • Re: (Score:2, Insightful)

        by sbate ( 916441 )
I got this in my head the other day and obsessed about it. I thought: this is it, the end of posting any pictures of my kids online. I thought soon I would want those button-sized anti-camera diodes pinned to their shirts like little Orthodox crosses. Of course I thought it through, and, well, what are you going to do? The best thing I can think of is to actually ask the cute girl if she has any pictures of herself naked. She may just have some; you never know till you ask....
    • by elrous0 ( 869638 ) * on Thursday August 09, 2007 @10:34AM (#20169645)
      Instead of "You appear to be writing a letter. Should I format it for you?" I guess we'll get "You appear to be viewing Japanese pornography. Should I de-pixelate it for you?"
    • by SnarfQuest ( 469614 ) on Thursday August 09, 2007 @10:56AM (#20169911)
      Take one of those celebrity nude photos with pixelated parts, cut out the pixelated parts, then run this on them.

      You get photos of the celebrities, wearing japanese clothing!
    • Sorry to disappoint you, but IIRC from the SIGGRAPH lecture, I think they removed people from the database.

      It did work quite well, however, offering a choice of different image completions.
Broken or flaky video files. Nothing is more irritating than an MPEG (or similar) error that causes an entire block to go black and smear itself all over the place until the next keyframe. I don't expect realtime correction, but it would be nice if I could patch the file rather than do another six-hour encode.
    • by CaptainPatent ( 1087643 ) on Thursday August 09, 2007 @09:43AM (#20168957) Journal
Unfortunately, I think this particular algorithm would need a base set of data to begin working. While I'm sure portions of this algorithm could be implemented for such an application, it seems a base set is needed within a single image, so a fully blank screen from a dropped frame, or damaged images showing bad colors, would not be successfully mended.
If, on the other hand, you were a movie producer and needed to get rid of the frame-change holes after losing the master print of a film, you perhaps would be able to use such a program to mend those holes in the upper corner.
    • by MobyDisk ( 75490 )
You might be better off doing another six-hour encode, depending on what is involved in this algorithm. I bet you would have to go through each frame, mark which areas are wrong, then obtain a library of semantically related images, then run the program against the frames that you marked bad - and still not have the right result.

      For what you are trying to do, a better idea would be to copy data from the previous and next frames. That's what they do when they "digitally remaster" old films to get rid of scratche
  • w00t! (Score:5, Funny)

    by morgan_greywolf ( 835522 ) on Thursday August 09, 2007 @09:37AM (#20168867) Homepage Journal
    It was as if a million fake celebrity pr0n websites cried and were suddenly silenced...
  • Dead (Score:5, Informative)

    by Spad ( 470073 ) <slashdot@s[ ].co.uk ['pad' in gap]> on Thursday August 09, 2007 @09:40AM (#20168901) Homepage
    Slashdotted already.

    BBC News coverage of the story is here: http://news.bbc.co.uk/1/hi/technology/6936444.stm [bbc.co.uk]
  • ehhh.... (Score:4, Interesting)

    by way2slo ( 151122 ) on Thursday August 09, 2007 @09:44AM (#20168965) Journal
    ...call me when they make this into a plugin for Photoshop.
  • GREYCstoration (Score:5, Interesting)

    by BlackPignouf ( 1017012 ) on Thursday August 09, 2007 @09:44AM (#20168969)
And if you don't have any pictures database, there's always GREYCstoration:
http://www.greyc.ensicaen.fr/~dtschump/greycstoration/index.html [ensicaen.fr]
It's pretty impressive:
http://www.greyc.ensicaen.fr/~dtschump/greycstoration/demonstration.html [ensicaen.fr]
and works with the GIMP.
    • I have played around with it and wasn't too impressed.
      This one looks bad...
http://www.greyc.ensicaen.fr/~dtschump/greycstoration/img/res_parrot.png [ensicaen.fr]

      This one you can mouse-over and have the image change on you. Again, not that impressed.
http://www.greyc.ensicaen.fr/~dtschump/greycstoration/img/res_claudia16.html [ensicaen.fr]

      Looks very blurry.

This one is actually okay, but that's because it's an owl. I'm sure all the blurring is there, it's just hard to notice.
      The eye's pupil isn't as rounded as it could have been.

      http://ww [ensicaen.fr]
      • Re: (Score:3, Insightful)

        by cluke ( 30394 )
        Are you kidding me? Removing stuff like fences and overlaid captions from photographic stills and making a damn decent fist at filling in what would be underneath? You are a hard man to impress!
    • by zalas ( 682627 )
Inpainting works fine for the examples they gave on the GREYCstoration page, but it totally fails at large, contiguous areas. GREYCstoration's target problem is to restore thin or small areas, whereas this paper seeks to restore larger areas. If you tried to inpaint, say, a large block of a photograph that is just ocean, it would fail miserably.
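The contrast can be seen in a toy version of diffusion inpainting, the family GREYCstoration belongs to: hole pixels are repeatedly replaced by the average of their neighbours. This recovers smooth or thin regions nicely but can only ever blur across a large gap, because no new content is brought in from outside the image. A minimal sketch, assuming grayscale images as 2D lists:

```python
def inpaint_diffusion(img, hole, iters=200):
    """Fill hole pixels by repeatedly averaging their 4-neighbours --
    the crudest form of PDE-based inpainting. Smooth regions fill in
    plausibly; large or textured holes come out blurry."""
    out = [row[:] for row in img]
    h, w = len(out), len(out[0])
    for _ in range(iters):
        for y, x in hole:
            neigh = [out[ny][nx]
                     for ny, nx in ((y - 1, x), (y + 1, x),
                                    (y, x - 1), (y, x + 1))
                     if 0 <= ny < h and 0 <= nx < w]
            out[y][x] = sum(neigh) / len(neigh)
    return out
```

The scene-completion paper sidesteps this limit by pasting real content from other photographs instead of propagating what little surrounds the hole.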
My first thought on this was how easy it will be to change just enough of a picture before releasing it to make it incredibly tough to tell that the way you're representing the picture isn't the way it really was.
  • by iknownuttin ( 1099999 ) on Thursday August 09, 2007 @09:50AM (#20169035)
    I take a picture of a hole?
    • Re: (Score:2, Funny)

      by Anonymous Coward
      It will insert two hands, gripping the sides.
    • I think that's what a previous poster was thinking about when he mentioned Japanese porn and de-pixelating... ;)
    • turn the photo into an advertisement of DNF?
    • by RuBLed ( 995686 )
      like the one on Mars? Someone should run the algorithm on that one, it would solve the question bugging us for so long...
  • by Opportunist ( 166417 ) on Thursday August 09, 2007 @09:51AM (#20169055)
    You never know what that "kinda-like" picture used to patch contains. You might get the opposite of what you want.
  • Image compression? (Score:5, Insightful)

    by grimJester ( 890090 ) on Thursday August 09, 2007 @09:54AM (#20169085)
If any hole in the image can be filled with a part of another pic, can't you compress an image by replacing one piece at a time with a reference to a patch? Also, how about replacing with patches of higher resolution than the original? I realize it would all be technically lossy as hell, but the compression artifacts should not be very noticeable to the human eye, right? Additionally, how about using this for movie compression? Filling in based on info from previous and next frame.

    I may have to actually RTFA this time.
    • by Anonymous Coward on Thursday August 09, 2007 @10:17AM (#20169399)
      If any hole in the image can be filled with a part of another pic, can't you compress an image by replacing one piece at a time with a reference to a patch?

      That only works if your patch addressing space takes less space than the bits you're replacing - and of course when you reload the image, you'll still get say a cat instead of an iguana in that window...

Also, how about replacing with patches of higher resolution than the original? I realize it would all be technically lossy as hell, but the compression artifacts should not be very noticeable to the human eye, right?

I'm not sure you really understand the concepts here. Replacing a patch with a higher res would be possible (but you'd have to resample the image first, basically) - and would either be incredibly lossy or perfectly lossless, depending on your viewpoint.

From a compression standpoint there's no reason to consider a high-res replace as more lossy than anything else. From a recognition standpoint, whether you're doing it high res or not, this would be a method that throws out image details for others... but that doesn't have anything to do with the resolution. So this is a lossy image manipulation, but not really a compression...

      And of course, none of that would cause any compression artifacts, so yeah the human eye wouldn't notice (assuming this software works as claimed)

      So to go back over the concepts:

      Lossy - a compression or manipulation to an image or other digital file from which you cannot reconstruct the original bits perfectly

Compression Artifact - a noticeable image tearing or other visual defect allowing one to differentiate between a lossy-compressed file and its original

      Additionally, how about using this for movie compression? Filling in based on info from previous and next frame.

      That's how movie compression came about. The first moving-file format that was widely available was animated GIF - which quickly got onto the trick of using transparent pixels for non-changing parts of a scene.

MPEG-1 one-upped it; one part of the spec specifies which blocks are to be sent in each frame; you can leave out any blocks you don't want... (they also smartly separated the chrominance and luminance channels, and subsampled the chrominance channel - not only is it smart compression, as the human eye perceives luminance at greater fidelity than chrominance, but it also ups the chances that you don't have to transmit some blocks)

Fast forward to MPEG-4 (non-H.263) - same basic block structure, same ability to not draw blocks, and now you can even specify offsets for blocks - you have probably heard this technology referred to as motion compensation - basically if something is moving on the screen but remains relatively the same pixel values regardless of motion, the movie file will record the motion without recording every pixel - the difference between a good MP4 compressor and a bad one mostly has to do with how well it identifies candidates for motion compensation, is my understanding...
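The block search behind that motion-compensation step is easy to sketch: for each block in the current frame, scan candidate offsets in the previous frame and keep the one with the lowest sum of absolute differences (SAD). Real encoders use far smarter search orders and encode the residual too; the block size and search radius here are just illustrative, and frames are assumed to be grayscale 2D lists:

```python
def motion_search(prev, cur, by, bx, bs, radius):
    """Find the offset (dy, dx) within +/- radius whose bs-by-bs block
    in `prev` best matches (lowest SAD) the block at (by, bx) in `cur`."""
    h, w = len(prev), len(prev[0])

    def sad(dy, dx):
        # Sum of absolute differences between the two blocks.
        return sum(abs(cur[by + y][bx + x] - prev[by + y + dy][bx + x + dx])
                   for y in range(bs) for x in range(bs))

    # Only consider offsets that keep the reference block inside the frame.
    candidates = [(dy, dx)
                  for dy in range(-radius, radius + 1)
                  for dx in range(-radius, radius + 1)
                  if 0 <= by + dy and by + dy + bs <= h
                  and 0 <= bx + dx and bx + dx + bs <= w]
    return min(candidates, key=lambda c: sad(*c))
```

An encoder would then transmit just the winning offset (plus a small residual) instead of the block's pixels, which is where the savings come from.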
I'm not sure you really understand the concepts here. Replacing a patch with a higher res would be possible (but you'd have to resample the image first, basically) - and would either be incredibly lossy or perfectly lossless, depending on your viewpoint.

From a compression standpoint there's no reason to consider a high-res replace as more lossy than anything else. From a recognition standpoint, whether you're doing it high res or not, this would be a method that throws out image details for others... but th
    • ...I may have to actually RTFA this time.

      Don't you do it! If you do, you can hand in your /. card, tazer, and key to the clubhouse, 'cause you're out of the club.

      Next thing you know, you'll be reading the flippin' article and posting insightful things that the rest of us, who spend our 9-5 journey together every day, will have no way to counter unless we start to read the freakin' articles! This will have an impact beyond what you realize. For the good of the greater, don't RTFA!

    • by whyde ( 123448 )
      Additionally, how about using this for movie compression? Filling in based on info from previous and next frame.

      They could just make Speed 2 a reference to Speed and be done with it, for example.
  • pfft (Score:5, Funny)

    by Shadow Wrought ( 586631 ) * <[shadow.wrought] [at] [gmail.com]> on Thursday August 09, 2007 @09:55AM (#20169107) Homepage Journal
CSI Miami and NY have had infinite zoom capability with photos for years, and you're excited about this? Bah.
    • Re: (Score:2, Interesting)

      by Anonymous Coward
      Blowing up pictures a really huge number of times is nothing new.
The film 'Blow-Up', starring David Hemmings, was made in the 1960s (AFAICR, as far as I can remember...). It used this as the core of the film plot.

      Now, as a photographer I am not overly worried by this until these doctored images find their way onto sites like Wikipedia and people start using them as the 'Real Thing'.

There are laws in some countries that stop advertisers using 'doctored' images in things like holiday brochures. Once we start ge
      • ...it's not like this isn't happening already. Any professionally produced advertising will have pictures that have been tidied by photoshop at the ad agency already. Sure, it'd be false advertising to doctor these to remove building sites near holiday complexes, etc, but it's not like it can't and isn't being easily done right now.
  • by 192939495969798999 ( 58312 ) <.info. .at. .devinmoore.com.> on Thursday August 09, 2007 @10:05AM (#20169231) Homepage Journal
    It takes an existing image and finds a very similar image in a huge catalog, then adds in a similarly-shaped piece to the existing image where applicable. So it's more like a puzzle solver than an image completion engine. If you don't have a huge, huge catalog of images, it won't really work for any given image as well as their samples.
It's not that bad; all you need is the original full-size image without the blurring and the pr0n site address overlaid on top.
    • by pla ( 258480 )
      If you don't have a huge, huge catalog of images, it won't really work for any given image as well as their samples.

      So basically, most of us could only fix images missing pink/beige* areas.

      I'll still take it. ;)



      * - Nothing racist intended; most online porn quite simply features caucasian or light-skinned asian models, like it or not.
      • by JDevers ( 83155 )
        Dude, there is a LOT of black and Hispanic porn online, you just have to know where to look. Specialization is the key, the predominant audience is white or Asian, so the "average" site shows that...but there are LOTS of black or Hispanic sites.
    • by MythMoth ( 73648 )
      The internet being what it is, I suspect that the software will be subverted to allow you to digitally remove peoples' clothes in three... two... one... :-)
Due to recent advances at Carnegie Mellon, you have all been made redundant by a computer algorithm. Sorry, progress is a biatch.

    Yours,
    some code and a database
That is the problem for people who specialize in a technology: sooner or later it will become obsolete. People need to stay ahead of technology, not just keep up with it. If your job requires you to do stuff that can be easily written down so someone can pick right up and do it, then there is a good chance it could be obsoleted soon. But if your job requires you to be actively thinking beyond pure logic, then you may have a chance to stay ahead of technology.
  • Howabout... (Score:2, Interesting)

    by SimonGhent ( 57578 )
    .. a picture of yourself, with your face blanked out... whose face would you get?
    • Re: (Score:3, Funny)

      by Opportunist ( 166417 )
      Depends, will they have access to all the pics the NSA gathers from people travelling in and out of the US?
    • by E++99 ( 880734 )
Whoever dresses the most like you. (Or stands in the most similar location when getting their picture taken.)
  • by sjaguar ( 763407 ) on Thursday August 09, 2007 @10:36AM (#20169661) Homepage
    "this section has been intentionally left blank"
  • Not "Arbitrary" (Score:4, Informative)

    by doug141 ( 863552 ) on Thursday August 09, 2007 @10:43AM (#20169727)
    Although the summary says the method will fill arbitrary holes, at the link that claim is not made, and in their examples they delete specific picture elements.
  • Reminds me of the recently announced Photosynth [live.com] from Microsoft that seems to do something similar, but focuses on stitching images together rather than replacing parts of existing ones.
    • by zalas ( 682627 )
Apart from the registration of images to each other, they are very different systems. Photosynth is used to organize a collection of similar images into a cluster and register them with each other in order to extract an intuitive 3D browsing method for photographs of the same thing. It requires a tremendous amount of pre-processing to use Photosynth. For this hole-filling algorithm, although it is searching for matches as well, it is doing so in a less strict fashion; it merely tries to find "similar" images
  • by Urban Garlic ( 447282 ) on Thursday August 09, 2007 @10:59AM (#20169945)
    I wonder if this is part of the beginning of a new, computationally-driven problem-solving paradigm. As more and more data is stored, and if search algorithms become more and more clever, the cost of "looking up" (computationally speaking) the answer to a problem might be lower than the cost of "remembering" (using local storage) or "figuring out" (using local CPU power) the answer.

    This is already happening informally in the personal sphere, because of things like Google, recently amplified by the iPhone and its inevitable successors in the ubiquitous rapid-access web-tool field. As they say, these days, if you have a web browser, you hardly have to wonder about anything anymore.

    Of course, problem solving by search isn't exactly a new paradigm, but it could be a newly-cheap paradigm.
    • I wonder if this is part of the beginning of a new, computationally-driven problem-solving paradigm. As more and more data is stored, and if search algorithms become more and more clever, the cost of "looking up" (computationally speaking) the answer to a problem might be lower than the cost of "remembering" (using local storage) or "figuring out" (using local CPU power) the answer.

      No, it's part of an existing, computationally-driven problem solving field that has existed for decades.

      And don't refer to a

    • by zalas ( 682627 )
When they gave this presentation at work, one of the things they wanted to stress was that before, scientists were coming up with clever abstractions and trying to develop an artificial intelligence to solve many problems. Nowadays, if there is enough data, using the data in search/matching will rival the power of the "AI." For example, let's suppose our problem is OCR of people's handwriting. Without enough data, we might want to construct a "smart" approach whereby we analyze a few people's handwriting
  • Group gives paper at conference, then endlessly spams media with paper.
  • you get if you unwisely follow the instructions on the other current front-page article about building your own high powered laser? That's uncanny.
    • you get if you unwisely follow the instructions on the other current front-page article about building your own high powered laser? That's uncanny.

      Actually, your eyes already come pre-equipped with exactly such an algorithm [brynmawr.edu] (needed for the naturally occurring blind spot at the space where the optical nerve is attached to the eyes). And apparently, it happily works for the extra, laser-induced blind spots as well.

Browse around the site. Not only does it fill in uniform background color (easy), but also more complicated patterns (lines going through the blind spot), and it even autocompletes repeating patterns (a field of red circles).

  • by dpbsmith ( 263124 ) on Thursday August 09, 2007 @11:43AM (#20170541) Homepage
    This is very cool, and I wonder how similar it is to what the brain does with respect to blind spots?

For those who don't know: each eye has a surprisingly large blind spot at the place where the optic nerve enters the eye. At reading distance, in the right eye, it's about four or five inches to the right of the spot at which you are gazing, and many textbooks and "fun with optical illusions"-type books will have a diagram... like the one on this web page... [brynmawr.edu] and directions for finding it. The blind spot is much larger than the dot on that web page, incidentally. If you explore, you'll find that... at the distance at which the dot disappears... the blind spot is nearly an inch wide and an inch-and-a-half high.

    Even allowing for the fact that each eye has the blind spot in a different place so they fill in for each other, once you discover how big the blind spot is... and how relatively close to your position of gaze it is... you'll be astonished that almost nobody notices it until it is pointed out.

    The brain does something more or less like filling in the blind spot. I say "more or less like" because it is very hard to answer the question "what do you see in the blind spot." For example, if you hold a computer keyboard at the right distance so that you're looking at the "G" key and the "K" key is in your blind spot, what do you see? Certainly not a black spot, certainly not a white spot, certainly not a "hole" or emptiness. Probably you have an impression of computer keys. Do you see a letter K? Certainly not, yet somehow you don't see a blank key, either.

Incidentally, I used to suffer from migraine headaches, and one of the symptoms for some people is the formation of blind spots which can be even larger than the "normal" blind spot, and can appear in central vision. On one memorable occasion, I was looking at the cover of a hardbound book, and I can tell you that when I looked at the title, my perception was that the stamped, printed title disappeared, yet I would have sworn in a court of law that I still saw the cloth texture extending across the blind spot.

Although he does not specifically refer to it as a migraine illusion, I believe Lewis Carroll was known to be a migraineur, and in Chapter V of Through the Looking-Glass, "Wool and Water," Alice notices that "The shop seemed to be full of all manner of curious things -- but the oddest part of it all was that, whenever she looked hard at any shelf, to make out exactly what it had on it, that particular shelf was always quite empty, though the others round it were crowded as full as they could hold." Any migraineur who experiences central blind spots will recognize this description.

Hays and Efros's system -- a relatively simple algorithm operating on a large database of previously-seen images -- seems to me to be the sorta-kinda way in which one could imagine the brain working.

    I wonder if there's any way to test this?
    • The brain fills in the blind spots only with very recent images (within 1 second perhaps) or patterns from nearby areas.

      It doesn't consult a big archive or do "semantic" matching or anything tricky.
  • However, where did they get the database of images from? If they pulled the content from the web, certainly there are copyright issues (at least if they commercialize such an engine including that database). It might be hard to tell where the source came from, but if they're profiting off the use of my imagery, I'd expect a cut.

    It would not seem to me to fall under fair use...

    MadCow.
  • Photoshop is now only to be used for high profile history; lesser history is to be automated.

    IGNORANCE IS STRENGTH
  • Grassy Knoll (Score:2, Interesting)

    by waterlogged ( 210759 )
    Does that mean that if I erase part of the grassy knoll it will draw in the second gunman? Or more importantly if I erase the goatse hole will it draw in a hole?
  • Reuters implements this technology immediately, and some right-wing bloggers wondered where all the fake photographs went.
  • ... on the GOPs public image appear promising.
I'm not sure what their current process is (I had assumed it was costly and manual), but this might be great for removing dust and excessive grain from old movies and TV shows. Perhaps that'll actually make it worth it to buy the old stuff on HD DVD/Blu-ray.
  • ... so I'll be able to see in the blind spot that I created using my new high-power DVD-burner laser pointer flashlight?
