Fixing Bugs, But Bypassing the Source Code
shreshtha contributes this snippet from MIT's Technology Review: "Martin Rinard, a professor of computer science at MIT, is unabashed about the ultimate goal of his group's research: 'delivering an immortal, invulnerable program.' In work presented this month at the ACM Symposium on Operating Systems Principles in Big Sky, MT, his group has developed software that can find and fix certain types of software bugs within a matter of minutes." Interestingly, this software doesn't need access to the source code of the target program.
One might ask the question... (Score:0, Interesting)
was it ever applied to itself? ... and did it gain consciousness?
Who will police the police? (Score:2, Interesting)
Did they use that tool to develop that tool? (Score:5, Interesting)
I wonder if we should turn that software loose on itself and see what it finds.
Obviously Linux developers aren't human ;-) (Score:3, Interesting)
This is absolutely correct, so long as one assumes that Windows systems are the only systems, and Linux developers aren't human.
Re:Why would you need to access the source (Score:3, Interesting)
Look at the hex, make changes. The concept is no different from inserting or replacing a JMP to get around software protection.
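The JMP-patching trick the comment describes can be sketched in a few lines. This is a toy illustration, not anything from Clearview: `patch_jump`, the byte pattern, and the sample "binary" are all invented here. It overwrites an x86 conditional jump (JE, opcode 0x74) with NOP bytes (0x90) so a protection check always falls through.

```python
def patch_jump(image: bytes, pattern: bytes, nop: int = 0x90) -> bytes:
    """Return a copy of `image` with the first occurrence of
    `pattern` overwritten by NOP bytes."""
    offset = image.find(pattern)
    if offset == -1:
        raise ValueError("pattern not found")
    patched = bytearray(image)
    # Replace the jump bytes in place; everything else is untouched.
    patched[offset:offset + len(pattern)] = bytes([nop]) * len(pattern)
    return bytes(patched)

# A made-up 8-byte "binary": 0x74 0x05 is JE +5 on x86, the branch
# that skips past the "registered" code path.
binary = bytes([0x55, 0x89, 0xE5, 0x74, 0x05, 0xB8, 0x01, 0x00])
patched = patch_jump(binary, bytes([0x74, 0x05]))
```

This is exactly the kind of edit a human with a hex editor makes by hand; the hard part in real life is finding *which* jump to kill, not overwriting it.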
Exactly! This software sounds like it might work for getting around non-technical vendor-imposed arbitrary limitations.
If you don't feel like paying for the Standard Edition of SQL Server 2005 anymore, now you won't have to: just purchase the slightly crippled Workgroup edition and have ClearView make sure the database keeps running after it blows past its self-imposed limits. Don't have legal copies of Windows 7? That's OK. Now your government or your office will have a contingency plan should Microsoft decide to hit the kill switch on you.
Not that I expect this software to work that well. In my mind, there is no substitute for a real, knowledgeable human being tinkering with a hex editor in the same manner this software will try to do.
That said, I expect such software to work very well on contrived, prepared examples, and I expect it will make lots of money even if it doesn't work very well in real life. That's the nature of legacy software used in business. You can usually sell any automated, magical, half-baked solution for untold amounts of money if the customer comes to you at the point where he thinks he's about to lose everything (and has no idea of, or no intention of, getting it fixed the right way in the first place).
Re:How about (Score:4, Interesting)
Be skeptical (Score:2, Interesting)
Martin Rinard is a talented man with the largest ego in academia. Of course he is "unabashed"; he's never been "abashed" for a moment in his life. Every research project Rinard has completed has been the one he claimed would scoop and shut down all other computer scientists' efforts. Take any claims he makes with a big grain of salt. It's not that he's a fraud, it's just that history shows he isn't nearly as godlike as he thinks or claims to be.
Posted anonymously because I don't need Rinard as an enemy.
Re:Obviously Linux developers aren't human ;-) (Score:3, Interesting)
The fact that they care far less about backwards ABI compatibility, since most things for Linux can be recompiled, might have a slight effect on why Linux bugs get 'fixed' faster. You have a different definition of 'fix' than most of the world.
Re:This really deserves (Score:1, Interesting)
This idea can work. It is indeed possible to fix some types of trivial bugs directly in the executable.
Here is what could go wrong:
1 - A program is written with high security features.
2 - The programmer disables security just for testing. I create such back doors all the time.
3 - At some point a bug is introduced that closes the back door.
4 - Trying to access the back door causes a trap.
5 - The program passes quality control. Accessing the back door causes an ugly trap but this is a minor issue.
6 - Clearview detects the bug, fixes it, and reopens the back door.
7 - Now everyone can access all other accounts.
The root problem is that Clearview does not understand the intent of the program.
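The seven steps above can be modeled in a few lines. This is a hypothetical illustration of the intent problem, not Clearview's actual behavior: `check_password`, `ACCOUNTS`, and the "automated fix" are all invented names. The crash at the back door is the *safe* outcome, but an automated repair only sees a crash to be suppressed.

```python
ACCOUNTS = {"alice": "s3cret"}

def check_password(user: str, password: str) -> bool:
    if user == "debug":
        # The back door left in for testing; a later bug made this
        # path trap instead of logging you in (steps 3-4 above).
        raise RuntimeError("back door disabled")
    return ACCOUNTS.get(user) == password

def clearview_style_fix(user: str, password: str) -> bool:
    # The automated repair only knows the raise crashes the program,
    # not that crashing here was the safe outcome. It swallows the
    # error and lets the old back-door path succeed (step 6).
    try:
        return check_password(user, password)
    except RuntimeError:
        return True  # back door reopened: step 7
```

A human patching this would ask what the trap *protects*; the repair tool only asks how to make it stop firing.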
Re:How about (Score:5, Interesting)
The sort of "correctness" and "incorrectness" at issue in many security problems are typically "stupid mistakes," nothing very sophisticated.
You're taking too much of the "Ivory Tower Computer Science" view on this. Car analogy: Clearview isn't trying to figure out whether the whole car is perfect (in the real world it's 100% certain to be imperfect anyway); it's just patching the flat tire so you can keep driving.
FWIW, I've already manually fixed programs without having the source, and managed to get a program to do stuff the manufacturer said it couldn't do.
Just because you can't make it perfect doesn't mean you can't make it work better.
Re:Obviously Linux developers aren't human ;-) (Score:1, Interesting)
Not always true. Especially about cohabiting dynamic libraries like glibc, since there's been this exact problem in the past with glibc versions being incompatible and the kernel picking the wrong one to use.
Re:clearview (Score:4, Interesting)
id1: Friar Tuck... I am under attack! Pray save me!
id1: Off (aborted)
id2: Fear not, friend Robin! I shall rout the Sheriff of Nottingham's men!
id1: Thank you, my good fellow!
http://catb.org/jargon/html/meaning-of-hack.html [catb.org]
Re:How about (Score:1, Interesting)
Parent has it right. Clearview's proposed solution is made of rainbows, unicorns and fail.
There's nothing wrong with detecting anomalies, but any attempt to modify the code to prevent the anomaly from forming is misguided at best. Even if the modified program fails to crash and fails to trigger the anomaly detector, there's no way to prove that the program still works as intended. For example, suppose the fix of an overflow also elides the initialization of some other variable, which results in data corruption? How is that better than an overflow/crash?
Therefore the only solution I'd consider using in production would be to inject an exception that fires only at the point where the system detected the anomaly. Yes, that means my suggestion would only work in languages with a suitable exception model, and that it would only be useful for applications that handle the exception.
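The exception-injection idea above can be sketched as a wrapper around the suspect call. Everything here is a hypothetical stand-in: `AnomalyDetected`, `guard`, and the length-limit precondition play the role of whatever condition the detector actually flagged. Instead of rewriting the code, the known-bad input surfaces as an exception the application can choose to handle.

```python
class AnomalyDetected(Exception):
    """Raised in place of the detected anomaly, for the app to handle."""

def guard(func, precondition):
    """Wrap `func` so inputs failing `precondition` raise instead of
    reaching the buggy code path."""
    def wrapper(arg):
        if not precondition(arg):
            raise AnomalyDetected(f"rejected input: {arg!r}")
        return func(arg)
    return wrapper

# Stand-in for a fixed-size copy that overflows on long input.
copy_into_buffer = guard(lambda s: s[:16], lambda s: len(s) <= 16)

try:
    copy_into_buffer("A" * 64)   # the overflow-triggering input
except AnomalyDetected:
    result = "connection dropped cleanly"
```

The behavior change is explicit and local: legitimate inputs pass through unchanged, and the failure mode is one the application already knows how to test for.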
Re:It's interesting, but software should "expire". (Score:3, Interesting)
Hah, we're a long way from finishing code to do text boxes and buttons.
There are many improvements:
1) Write them to work with OpenGL
2) Write them to scale properly at any DPI
3) Have them fully themeable via CSS style sheets
4) Have them styleable with SVG files
5) Add multi-touch support
Also, the Linux kernel has something like 17 separate linked-list implementations, each doing slightly different things :)
Re:How about (Score:1, Interesting)
Although a bit contrived, AOL used buffer overflows to prevent Microsoft from hijacking its instant messaging software (http://www.securiteam.com/securitynews/2EUQJRFS3U.html).
Fixing bugs without accessing source code (Score:3, Interesting)
I once filed a bug report to a developer with instructions on how to reproduce it.
He responded with a fix that involved no changes to the source code.
He said, "don't do that."
Re: insightful? try stupid (Score:1, Interesting)
I'm sorry you had to come out of 21-month hibernation to say something so stupid, but you're quite wrong.
You can prove that a given change breaks unit tests, except ... wait for it ... they don't use unit tests. Instead they use their rainbows and unicorns magic heuristics. You can't prove anything without a comprehensive per-application unit test suite, so you might as well just replace all the syscalls with nop stubs. For all you know, that's what this program does. That's why this idea is worthless to industry.
Black-box voodoo machine-code rewriting without comprehensive testing is pure madness. No serious company would ever consider doing this; therefore the idea as implemented is completely worthless.
Randomly modifying a program and then failing to test it is worse than doing nothing. Read the rest of the comment you replied to for the correct way to drop a connection. The suggested method allows for unit testing that includes unexpected failures caught at runtime by the watchdog application.
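The watchdog approach the comment endorses might look like this. The names are hypothetical stand-ins: `handle_request` contains a latent fault, and `watchdog` turns that runtime failure into a clean connection drop. The point is that this failure path, unlike a binary patch, is something a unit test can assert on directly.

```python
def handle_request(data: bytes) -> bytes:
    """A hypothetical handler with a latent bug a detector might catch."""
    if len(data) > 16:
        raise OverflowError("request too large")
    return data.upper()

def watchdog(handler, data: bytes):
    """Run `handler`; on an unexpected failure, drop the connection
    instead of letting the process crash or be silently patched."""
    try:
        return ("ok", handler(data))
    except Exception as exc:
        return ("dropped", str(exc))

# The failure path can be asserted in a test suite:
assert watchdog(handle_request, b"x" * 64)[0] == "dropped"
assert watchdog(handle_request, b"hi") == ("ok", b"HI")
```

Nothing magical: the behavior under failure is written down, reviewed, and tested like any other code path, which is exactly what a heuristic binary rewrite can't offer.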