Fixing Bugs, But Bypassing the Source Code

shreshtha contributes this snippet from MIT's Technology Review: "Martin Rinard, a professor of computer science at MIT, is unabashed about the ultimate goal of his group's research: 'delivering an immortal, invulnerable program.' In work presented this month at the ACM Symposium on Operating Systems Principles in Big Sky, MT, his group has developed software that can find and fix certain types of software bugs within a matter of minutes." Interestingly, this software doesn't need access to the source code of the target program.
  • by Anonymous Coward on Thursday October 29, 2009 @07:06PM (#29918069)

    was it ever applied to itself? ... and did it gain consciousness?

  • by ashanin ( 1367775 ) on Thursday October 29, 2009 @07:18PM (#29918187)
    Who will fix the bugs in the ClearView program?
  • by 140Mandak262Jamuna ( 970587 ) on Thursday October 29, 2009 @07:32PM (#29918311) Journal
    My friend developed an automatic code-quality estimation program for his master's thesis. It basically computed the average number of lines per function, the ratio of code to comments, and other such metrics, and gave the code a letter grade. The fiendish prof announced that he would run the program on its own source; whatever letter grade it spat out would be his thesis grade. He got a D. He begged, cried, threw a hissy fit, wangled a B, and scraped through the degree.

    I wonder if we should turn that software loose on itself and see what it finds.

  • by Zero__Kelvin ( 151819 ) on Thursday October 29, 2009 @07:32PM (#29918313) Homepage

    "When a potentially harmful vulnerability is discovered in a piece of software, it takes nearly a month on average for human engineers to come up with a fix and to push the fix out to affected systems, according to a report issued by security company Symantec in 2006."

    This is absolutely correct, so long as one assumes that Windows systems are the only systems, and Linux developers aren't human.

  • by stephanruby ( 542433 ) on Thursday October 29, 2009 @08:04PM (#29918625)

    Look at the hex, make changes. The concept is no different than inserting or replacing a JMP to get around software protection.

    Exactly! This software sounds like it might work for getting around non-technical vendor-imposed arbitrary limitations.

    If you don't feel like paying for the Standard Edition of SQL Server 2005 anymore, now you won't have to: you can just purchase the slightly crippled Workgroup edition and have ClearView make sure the database keeps running after it blows past its self-imposed limits. Don't have legal copies of Windows 7? That's OK. Now your government or your office will have a contingency plan, should Microsoft decide to hit the kill switch on you.

    Not that I expect this software to work that well. In my mind, there is no substitute for a real, knowledgeable human being tinkering with a hex editor in the same way this software tries to.

    That being said, I expect such software to work very well on contrived, prepared examples, and to make lots of money even if it doesn't work well in real life. That's the nature of legacy software used in business: you can usually sell any automated, magical, half-baked solution for untold amounts of money if the customer comes to you at the point where he thinks he's about to lose everything (and has no idea how, or no intention, of getting it fixed the right way in the first place).
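The "replace a JMP" idea quoted above can be sketched concretely. This is a toy illustration, not a real cracking tool: the opcodes are genuine x86 short-jump encodings, but the binary image and offset are invented for the example.

```python
# A toy sketch of the "replace a JMP" idea: flip a conditional short
# jump (x86 opcode 0x74, JZ) into an unconditional one (0xEB, JMP) so
# the protection check that follows is always bypassed. The image and
# offset here are invented; real patching means disassembling the
# binary to locate the check first.
JZ, JMP = 0x74, 0xEB

def patch_jump(image: bytes, offset: int) -> bytes:
    if image[offset] != JZ:
        raise ValueError("expected a JZ opcode at the given offset")
    patched = bytearray(image)
    patched[offset] = JMP  # the branch is now always taken
    return bytes(patched)

# NOP, JZ +5, NOP ... after patching, the jump is unconditional:
image = bytes([0x90, 0x74, 0x05, 0x90])
assert patch_jump(image, 1) == bytes([0x90, 0xEB, 0x05, 0x90])
```

The same one-byte change is exactly what a human with a hex editor would make; the open question in the thread is whether software can pick the right byte on its own.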

  • Re:How about (Score:4, Interesting)

    by raddan ( 519638 ) * on Thursday October 29, 2009 @08:49PM (#29919137)
    You can't write an algorithm that takes another algorithm as input and outputs whether that second algorithm is correct (a consequence of Rice's theorem, which generalizes the halting problem). Since ClearView must make this decision somehow ("this behavior is bad; make it good"), the process cannot be algorithmic. However, this is exactly how the vast majority of software is written now: a programmer has a good idea about how to solve the problem, but does not provably solve it. If you believe language designers, that's part of the problem. ClearView just adds another layer of heuristics on top of the ones that are already there. Someone has to come up with those rules, and that makes the actual work of understanding a program much more complicated. But, you know, the MIT people have been chasing AI for a long time, so maybe they don't think that understanding something is important as long as there's a good simulacrum of the thing they're trying to create. Black-box computer science.
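The impossibility claim in the comment above is the classic diagonalization argument. A runnable sketch, with a hypothetical `is_correct` decider passed in as a parameter since no real one can exist:

```python
# A sketch of the diagonalization behind the comment above. Given ANY
# claimed correctness decider, we can build a program that does the
# opposite of whatever the decider predicts about it.
def make_paradox(is_correct):
    def paradox():
        if is_correct(paradox):
            raise RuntimeError("misbehaving")  # decider said "correct"
        return "ok"                            # decider said "incorrect"
    return paradox

# A decider that answers "correct" is immediately contradicted:
p = make_paradox(lambda f: True)
try:
    p()
    decider_was_wrong = False
except RuntimeError:
    decider_was_wrong = True
assert decider_was_wrong
```

A decider that answers "incorrect" is contradicted just as easily, since the program then returns normally. This is why ClearView, or any such tool, can only apply heuristics to a restricted class of bugs.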
  • Be skeptical (Score:2, Interesting)

    by Anonymous Coward on Thursday October 29, 2009 @08:58PM (#29919227)

    Martin Rinard is a talented man with the largest ego in academia. Of course he is "unabashed"; he's never been "abashed" for a moment in his life. Every research project Rinard has completed has been the one he claimed would scoop and shut down all other computer scientists' efforts. Take any claims he makes with a big grain of salt. It's not that he's a fraud, it's just that history shows he isn't nearly as godlike as he thinks or claims to be.

    Posted anonymously because I don't need Rinard as an enemy.

  • by BitZtream ( 692029 ) on Thursday October 29, 2009 @10:16PM (#29919809)

    The fact that they care far less about backwards ABI compatibility, since most things for Linux can be recompiled, might have a slight effect on why Linux bugs get 'fixed' faster. You have a different definition of 'fix' than most of the world.

  • by Anonymous Coward on Thursday October 29, 2009 @11:01PM (#29920143)

    This idea can work. It is effectively possible to fix some types of trivial bugs in the executable.
    Here is what could go wrong:
    1 - A program is written with strong security features.
    2 - The programmer disables security just for testing. I create such back doors all the time.
    3 - At some point a bug is introduced that closes the back door.
    4 - Trying to access the back door causes a trap.
    5 - The program passes quality control. Accessing the back door causes an ugly trap, but this is a minor issue.
    6 - Clearview detects the bug, fixes it, and reopens the back door.
    7 - Now everyone can access all other accounts.
    The root problem is that Clearview does not understand the intent of the program.
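The seven steps above can be made concrete. A toy sketch with entirely hypothetical names, showing how a blind "stop the crash" fix restores the vulnerability:

```python
# A toy version of the seven steps above (all names hypothetical).
DEBUG_BACKDOOR = True          # step 2: security disabled "just for testing"
accounts = {"alice": "s3cret"}

def login(user, password, patched=False):
    if DEBUG_BACKDOOR and password == "debug":
        if not patched:
            # steps 3-5: a later bug broke the back door; using it now
            # traps with a KeyError, which QA waved through as cosmetic
            return accounts[user + "-nonexistent"]
        # step 6: the automated "fix" removes the trap -- and step 7
        # follows: the account is readable without its real password
        return accounts.get(user, "???")
    return "ok" if accounts.get(user) == password else "denied"

# The crash is gone, and so is the security:
assert login("alice", "debug", patched=True) == "s3cret"
```

Nothing in the binary says which behavior was intended; the fixer only sees that one path traps and the other doesn't.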

  • Re:How about (Score:5, Interesting)

    by TheLink ( 130905 ) on Thursday October 29, 2009 @11:17PM (#29920227) Journal
    Clearview doesn't have to figure out whether the entire program is correct. It just tries to fix what's known to be incorrect (and, presumably, only what falls into the subset of bugs it knows how to fix).

    The "correctness" and "incorrectness" at issue in many security problems are typically stupid mistakes, nothing very sophisticated.

    You're taking too much of the "Ivory Tower Computer Science" view on this. Car analogy: Clearview isn't figuring out whether the whole car is perfect (in the real world it's 100% likely to be imperfect anyway ;) ); all it does is help detect and patch the holes in the exterior. It doesn't have to fix things perfectly.

    FWIW, I've already manually fixed programs without having the source, and managed to get a program to do stuff the manufacturer said it couldn't do ;). I've also fixed a TCL program stored in an Oracle database by hex-editing the Oracle DB file, but since that was TCL it doesn't count as "without the source"...

    Just because you can't make it perfect doesn't mean you can't make it work better.
  • by Anonymous Coward on Thursday October 29, 2009 @11:59PM (#29920439)

    Not always true, especially for cohabiting dynamic libraries like glibc: this exact problem has occurred in the past, with incompatible glibc versions installed side by side and the dynamic linker picking the wrong one to use.

  • Re:clearview (Score:4, Interesting)

    by Danny Rathjens ( 8471 ) <<gro.snejhtar> <ta> <2todhsals>> on Friday October 30, 2009 @12:25AM (#29920601)
    !X id1

    id1: Friar Tuck... I am under attack! Pray save me!
    id1: Off (aborted)

    id2: Fear not, friend Robin! I shall rout the Sheriff of Nottingham's men!

    id1: Thank you, my good fellow!
  • Re:How about (Score:1, Interesting)

    by Anonymous Coward on Friday October 30, 2009 @12:41AM (#29920677)

    Parent has it right. Clearview's proposed solution is made of rainbows, unicorns and fail.

    There's nothing wrong with detecting anomalies, but any attempt to modify the code to prevent the anomaly from forming is misguided at best. Even if the modified program fails to crash and fails to trigger the anomaly detector, there's no way to prove that the program still works as intended. For example, suppose the fix for an overflow also elides the initialization of some other variable, resulting in data corruption? How is that better than an overflow and a crash?

    Therefore the only solution that I'd consider using in production would be to inject an exception that only fires at the point where the system detected the anomaly. Yes, that means my suggestion would only work on languages with a suitable exception model, and that it would only be useful on applications that handle the exception.
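The injected-exception alternative described above might look like this in a language with a suitable exception model. A sketch with hypothetical names; the anomaly detector itself is assumed, and the "detected anomaly" here is simply an over-long input at a flagged overflow point:

```python
# A sketch of the injected-exception approach described above.
class AnomalyDetected(Exception):
    """Raised at the flagged point instead of rewriting the logic there."""

def guarded_copy(data: bytes, limit: int = 1024) -> bytes:
    # The injected guard: fail loudly where the monitor flagged an
    # overflow, instead of silently running machine-patched code.
    if len(data) > limit:
        raise AnomalyDetected("input exceeds buffer limit")
    return data

# An application that already handles exceptions can drop the request
# cleanly rather than trust a rewritten, unproven code path:
def handle_request(data: bytes):
    try:
        return ("ok", guarded_copy(data))
    except AnomalyDetected:
        return ("dropped", None)

assert handle_request(b"x" * 5000) == ("dropped", None)
```

The design choice is that the injected code only ever refuses to proceed; it never invents new behavior, so the worst case is a dropped request rather than silent corruption.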

  • by JohnFluxx ( 413620 ) on Friday October 30, 2009 @04:02AM (#29921389)

    Hah, we're a long way from finishing code to do text boxes and buttons.

    There are many possible improvements:

    1) Write them to work with OpenGL
    2) Write them to scale properly at any DPI
    3) Make them fully themeable via CSS style sheets
    4) Make them stylable with SVG files
    5) Add multi-touch support

    Also, the Linux kernel has something like 17 separate linked-list implementations, each doing slightly different things :)

  • Re:How about (Score:1, Interesting)

    by Anonymous Coward on Friday October 30, 2009 @07:18PM (#29930089)

    Although a bit contrived, AOL used buffer overflows to prevent Microsoft from hijacking its instant messaging software.

  • by cryptor3 ( 572787 ) on Friday October 30, 2009 @11:13PM (#29931785) Journal

    I once filed a bug report to a developer with instructions on how to reproduce it.

    He responded with a fix that involved no changes to the source code.

    He said, "don't do that."

  • by Anonymous Coward on Saturday October 31, 2009 @04:38AM (#29932895)

    I'm sorry you had to come out of 21-month hibernation to say something so stupid, but you're quite wrong.

    You can prove that a given change breaks unit tests, except ... wait for it ... they don't use unit tests. Instead they use their rainbows-and-unicorns magic heuristics. You can't prove anything without a comprehensive per-application unit-test suite, so you might as well just replace all the syscalls with nop stubs. For all you know, that's what this program does. That's why this idea is worthless to industry.

    Black-box voodoo machine-code rewriting without comprehensive testing is pure madness. No serious company would ever consider doing this; therefore the idea is completely worthless as implemented.

    Randomly modifying a program and then failing to test it is worse than doing nothing. Read the rest of the comment you replied to for the correct way to drop a connection: the suggested method allows for unit testing that includes unexpected failures caught at runtime by the watchdog application.

I've got a bad feeling about this.