Scientists Tout New Way To Debug Surgical Bots
coondoggie writes "When it comes to having robotic surgeons slicing around inside your brain, heart or other important body organ, surgeons and patients need to know that a software or hardware glitch isn't going to ruin their day. That's why a new technique developed by researchers at Carnegie Mellon University and the Johns Hopkins University Applied Physics Laboratory that promises to reliably detect software bugs and verify the software safety of surgical robots could be a significant development."
Free surgery? (Score:2)
I guess it's really a public beta...
For real safety (Score:3, Informative)
Do not connect these things to the internet, please.
Re: (Score:2)
but... how else can the doctor perform surgery from the beach? You don't want any sand in your open wound, do you?
Re: (Score:2)
The original idea for this robotic surgical equipment was remote operation, so it's pretty certain that's going to happen eventually.
ruin their day? (Score:4, Interesting)
Re: (Score:3, Insightful)
Re: (Score:2, Insightful)
No, this is what you get with a market-based approach to healthcare coupled with a government-granted monopoly in the form of ridiculous licensing laws.
If you increased the competition these people face, they wouldn't be so smug anymore.
Re: (Score:2)
I am not convinced there is a reliable system short of market forces for rooting out
* People who would take advantage of the system (whatever it is)
* People who would charge unreasonable rates (of course, insurance seriously hampers this)
* Underachievers
Re: (Score:2)
Because somehow a surgical mistake has far more dire consequences in the US than in, say, the UK or Canada.
Re:ruin their day ? (Score:5, Insightful)
If the doctor doesn't ruin your life, then you don't get to ruin his day. Works for both 'concerns'.
Not that I'm sure the touted system will work as planned (it never does) but trying to minimize bad outcomes is sort of the whole point to improving medical practice.
It's not just BMWs and trophy wives.
Re: (Score:2)
If the doctor doesn't ruin your life, then you don't get to ruin his day. Works for both 'concerns'.
How do you get to ruin his day? He has insurance. Know if or how many times your doctor has been accused of or sued for malpractice? Of course not. That information is secret.
trying to minimize bad outcomes is sort of the whole point to improving medical practice.
One of the best ways to do that is to make information about doctors public.
How to debug a surgical bot? (Score:5, Funny)
Just run it through the autoclave.
Re: (Score:2)
Be sure to leave the lithium batteries inside.
(We never did like that particular autoclave anyway.)
Re: (Score:2)
(We never did like that particular autoclave anyway.)
You need to completely disassemble anything that goes into an autoclave so that every surface is exposed. If you don't do that, you don't kill all the bacteria. Which means you failed at your job, with potentially lethal results. If you ever put something in that had a battery compartment to begin with, that tells me before you even close the door that you should be ejected out of the fourth-floor window in a fiery blob, as a lesson to other technicians not to mess around with safety procedures.
Re: (Score:2)
In the beginning, there was nothing. Then it exploded.
In an autoclave.
Re: (Score:2)
Before or after it is taken out of the patient? You could debug both at the same time...
Can detect buggy software? (Score:5, Insightful)
wow, does it solve the halting problem as well?
Re:Can detect buggy software? (Score:5, Insightful)
"wow, does it solve the halting problem as well?"
Pretty close to my thoughts as well.
Reliably finding all "bugs" is not possible without an intimate knowledge of exactly what the software is trying to accomplish. You can find certain kinds of errors algorithmically (what amount to "compile time" errors), and even prevent many kinds of run-time errors.
But there is simply no way to prevent the program from doing something unintentional (like cutting the wrong thing) without prior detailed knowledge of the actual intent.
Re:Can detect buggy software? (Score:4, Insightful)
But there is simply no way to prevent the program from doing something unintentional (like cutting the wrong thing) without prior detailed knowledge of the actual intent.
Back in the '80s David Parnas argued that software verification was fundamentally different from hardware verification, precisely because software has very nearly infinite bandwidth, which is the opposite of the argument being made in the article.
That is, for hardware, a small change in some parameter will in almost all practical cases result in behaviour that is not wildly different from the behaviour with the original value. This means that we can test it under a finite set of conditions and smoothly extrapolate to the rest.
With software, a single bit flipped can result in behaviour that is arbitrarily different from the original behaviour. As such, nothing short of exhaustive testing over all possible accessible states can prove software correct.
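A tiny illustration of that discontinuity (hypothetical values; C used just for concreteness):

#include <stdio.h>

/* Flipping a single bit in a loop bound changes the behaviour by nine
   orders of magnitude; nothing "nearby" in the input space warns you. */
int main(void) {
    unsigned intended = 16;                     /* intended iteration count */
    unsigned corrupted = intended ^ (1u << 31); /* same word, one bit flipped */
    printf("intended bound:  %u\n", intended);  /* 16 */
    printf("corrupted bound: %u\n", corrupted); /* 2147483664 */
    return 0;
}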
Algorithmic testing of the kind described here will catch some bugs, but provably correct algorithms can still run into practical implementation details that will result in arbitrarily large deviations from ideal behaviour. For example: the inertial navigation module on the Ariane 5 could have had provably correct code (for all I know it actually did) and the rocket still would have destroyed itself.
Most problems with embedded systems are of the kind, "The hardware did something unexpected and the software responded by doing something inappropriate." Anticipating the things the (incorrectly assembled, failure-prone, unexpectedly capable) hardware might do and figuring out what the appropriate response is constitutes the greater part of building robust embedded code, and this kind of verificationist approach, while useful as a starting point, won't address those issues at all.
Re: (Score:2)
I think referring to it as "useful as a starting point" very much undervalues this approach. None of that other stuff you referenced matters, at all, if the algorithm is guaranteed to screw up under certain circumstances. The example they gave in the article was of a provably screwed-up algorithm.
However, I'm really commenting along the lines of "tone" or "emphasis". It is true that once you have a known-good algorithm, there's still other stuff that can go wrong that you need to pay attention to.
Re: (Score:2)
Huh?
I'm pretty sure that the number of possible states in a moderately sized software system can be regarded as unfathomably large, which isn't "infinite" but is as close to "nearly infinite" as you can get... What's possibly saving these guys is that they are probably validating that the robotic "scalpel" follows all possible inputs and positions of the input device in the usual three cosy and well-behaved dimensions... OTOH I didn't bother to read the paper, so what do I know...
Besides, what happened
Stateless vs Stateful (Score:2)
Back in the '80s David Parnas argued that software verification was fundamentally different from hardware verification, precisely because software has very nearly infinite bandwidth, which is the opposite of the argument being made in the article.
That is, for hardware, a small change in some parameter will in almost all practical cases result in behaviour that is not wildly different from the behaviour with the original value. This means that we can test it under a finite set of conditions and smoothly extrapolate to the rest.
With software, a single bit flipped can result in behaviour that is arbitrarily different from the original behaviour. As such, nothing short of exhaustive testing over all possible accessible states can prove software correct.
More accurately, this is a distinction between stateless and stateful systems. Stateless machines always perform the same operations. That makes them much easier to test than stateful systems, whose behavior depends on memory of prior events.
Software is always stateful, although it can approach stateless behavior. Analog electronics and mechanical systems are generally stateless. Digital systems can be built either way, but registers and memories are pretty fundamental, so almost all modern digital chips are stateful even if they don't embed CPUs and software.
Re: (Score:2)
"Software is always stateful although it can approach stateless behavior. Analog electronics and mechanical systems are generally stateless. Digital systems can be built either way but registers and memories are pretty fundamental so almost all modern digital chips are stateful even if they don't embed cpu's and software."
This is all true, but the very nature of the kinds of applications being discussed demand that they be stateful.
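For what it's worth, the distinction both comments are drawing fits in a few lines of C (illustrative names, not from any robot codebase):

/* Stateless: the output depends only on the current input, so testing
   individual values covers all behaviour. */
double scale(double x) { return 2.0 * x; }

/* Stateful: the output depends on remembered history, so a test has to
   cover sequences of inputs, not just single values. */
double running_average(double x) {
    static double sum = 0.0;
    static unsigned count = 0;
    sum += x;
    count += 1;
    return sum / count;
}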
Re: (Score:3)
It's doable to ensure a program does exactly what its specs say it should do. See http://en.wikipedia.org/wiki/Formal_verification [wikipedia.org].
It's not often done because it's extraordinarily time-consuming; the time it takes goes up exponentially with program length/complexity, I believe.
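For the curious, here's roughly what that looks like in practice, sketched with ACSL-style contracts of the kind checked by tools like Frama-C (a toy example; a real proof also has to rule out integer overflow, hence the bound on n):

/*@ requires 0 <= n <= 46340;
    ensures \result == n * (n + 1) / 2;
*/
int sum_to(int n) {
    int total = 0;
    /*@ loop invariant 0 <= i <= n + 1;
        loop invariant total == i * (i - 1) / 2;
        loop variant n + 1 - i;
    */
    for (int i = 0; i <= n; i++)
        total += i;
    return total;   /* the prover checks the body against the contract,
                       not against what the programmer intended */
}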
Re: (Score:2)
It's doable to ensure a program does exactly what its specs say it should do. See http://en.wikipedia.org/wiki/Formal_verification [wikipedia.org].
It's not often done because it's extraordinarily time-consuming; the time it takes goes up exponentially with program length/complexity, I believe.
But it's not doable to automatically ensure the specs are correct. That is, if you have correct specifications for an adult, those might not be appropriate for a child or infant.
Re: (Score:2)
wow, does it solve the halting problem as well?
The unsolvability of the "halting problem" applies only to arbitrary programs with arbitrary input. Certain defined classes of problems with defined inputs are very solvable.
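Right: for programs whose loop bounds are fixed up front, termination is decidable by construction. A trivial C example of the kind of code such tools can handle (made-up example, obviously):

#include <stdio.h>

int main(void) {
    unsigned total = 0;
    /* The bound is a compile-time constant, so the loop provably
       terminates: no input can change the number of iterations. */
    for (unsigned i = 0; i < 1000; i++)
        total += i;
    printf("%u\n", total);   /* provably reached */
    return 0;
}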
Re: (Score:1)
Yes.
The halting problem says that you can't determine if a program halts or not in the general case. You definitely can check that for a chosen restricted set of programs (usually relatively small ones, but the core control system of a robot or the avionics control system for an airplane are small enough). See model checking [wikipedia.org] for how this is done in general. The article actually talks about a different technique, which is to develop a methodology for formally proving that class of programs correct.
Re:Yes, can detect buggy software. (Score:3)
wow, does it solve the halting problem as well?
That's not insightful, it's clueless about the subject. The halting problem is very rarely a problem in program verification. It's possible to write a program for which halting is undecidable. That's a bug in the program.
The Microsoft Static Driver Verifier, in practice, runs into problems it can't decide about 5% of the time. The goal is to determine that the driver is memory-safe to be in the kernel and calls all the driver APIs with valid parameters. If it does that, the driver can't crash the rest of the kernel.
Re: (Score:2)
The software world isn't as aware of the technology.
It's not "less aware of" so much as "this technology is less useful/more difficult to apply in the kind of complex semantic domains where software operates".
Hardware has very limited semantics compared to software: the number of distinct types on a chip is small compared to the number of distinct types in even a moderately complex application.
Each type of thing has its own set of behaviours that have to be encoded in the checking software. This encoding process has so far proven to be very difficult.
The halting problem is practically solved. (Score:2)
In the general case it is true that you cannot tell if a program will halt. However, most programs do halt, and most programmers understand why. Theorem provers present a language to their users that does not admit the general case: halting is guaranteed, as part of a more important guarantee that all programs can be reduced to a normal form.
Which means that the halting problem is solved for a program written in a language like Coq.
The practical challenge seems to be that formal, detailed specification is itself hard to get right.
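The comment names Coq; the same idea in Lean (a close cousin) fits in a couple of lines, as a sketch: the termination checker only accepts definitions it can prove total, here by structural recursion.

-- Accepted because the recursive call is on a structurally smaller
-- argument; a definition that might not terminate is rejected outright.
def sumTo : Nat → Nat
  | 0 => 0
  | n + 1 => (n + 1) + sumTo n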
Re: (Score:2)
Yes; it's intuitively obvious that this must be the case, but I think someone managed to prove it a few years back. However, there's some doubt if the proof is valid...
This new technique... (Score:3)
Re: (Score:1)
Re: (Score:1)
I bet you say the same thing about stack traces, right?
Re: (Score:2)
I bet you say the same thing about stack traces, right?
No. I prefer such safety back-stops.
The reason why I wrote my prior comment is that I read that the GNU/Linux people do not want to adopt BSD's strlcpy function because strlcpy encourages sloppy coding. The rationale given for that assertion is that strlcpy checks for the destination buffer being too small and zero-terminates regardless (unlike strncpy, which returns a non-zero-terminated buffer if the destination buffer is too small).
I think such types of rationale for not implementing strlcpy are misguided.
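For anyone who hasn't hit the difference, here's a sketch in C (my_strlcpy is a simplified stand-in, since strlcpy isn't portably available outside the BSDs):

#include <stdio.h>
#include <string.h>

/* Simplified stand-in for BSD strlcpy (assumes size > 0). */
static size_t my_strlcpy(char *dst, const char *src, size_t size) {
    size_t len = strlen(src);
    size_t n = (len < size - 1) ? len : size - 1;
    memcpy(dst, src, n);
    dst[n] = '\0';    /* always NUL-terminated, even on truncation */
    return len;       /* full source length lets callers detect truncation */
}

int main(void) {
    char small[4];

    strncpy(small, "abcdef", sizeof small);
    /* small now holds {'a','b','c','d'}: no terminator, not a valid string. */

    my_strlcpy(small, "abcdef", sizeof small);
    printf("%s\n", small);   /* prints "abc": truncated but safely terminated */
    return 0;
}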
Re: (Score:3)
Re: (Score:2)
As opposed to what is being written now, right?
Re: (Score:1)
Most software (games being the obvious exception) has the potential to cause harm to humans. Payroll, accounting, project management, word processing, etc. all have that potential.
Oh Crap! A Bug in the Debugging Software.. (Score:1)
KeYmaeraD (Score:1)
I believe that all people fall into one of two categories: either they can write good code or they can come up with clever names for things. Clearly this software was written by the former group.
Misleading summary (Score:4, Insightful)
Not just the summary is misleading (Score:1)
The whole article is written as if this were a total replacement debugging/design solution for all software.
It is really dealing with mechanical and human motion behaviours like inertial overshoot, path avoidance, and realistic feedback to the operator, rather than things like buffer overruns, atomic accesses, and race conditions.
Litton-Radcliffe LR260 (Score:1)
Huey, Louie and Dewey, please report to the Medical Bay.
(I must have gotten the drone mfr/model from the novelization. I don't think it's ever mentioned in the movie.)
undo mod (Score:3)
Patton Oswalt (Score:2)
Am I the only one who read the title and laughed because of a Patton Oswalt joke?
Source: http://www.youtube.com/watch?v=qbai-yBRyHg [youtube.com]
Wonder how they do it (Score:3)
bool DidBrainBotOperateSuccessfullyOn(Patient patient)
{
if ( patient != null && patient.IsVegetable && !patient.WasVegetable )
throw new PendingLawsuitException(patient.Lawyer, Doctor.Lawyers, Hospital.Lawyers, University.Lawyers, Manufacturer.Lawyers);
return true;
}
What they actually do (Score:2)
The problem is to construct a control system such that the robot scalpel (controlled by a person through a force-feedback device) will not exit some pre-defined boundaries, e.g. boundaries that prevent the scalpel from cutting a nerve. The previous system was flawed because it didn't take into account delays in the response of the scalpel. By creating a more accurate model, and using methods that are able to prove things about the dynamic system, they prove that with the corrected model the scalpel stays within the boundaries.
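Not the paper's actual model, but a back-of-the-envelope C sketch of why the delay matters: the enforced boundary has to be inflated by the worst-case distance the tool can travel before the controller's response takes effect (all numbers hypothetical).

#include <stdbool.h>

#define MAX_SPEED_MM_S  50.0   /* hypothetical worst-case tool speed */
#define DELAY_S          0.02  /* hypothetical worst-case response delay */

/* Naive check: allows motion right up to the boundary, so a delayed
   response can still overshoot into forbidden tissue. */
bool naive_ok(double dist_to_boundary_mm) {
    return dist_to_boundary_mm > 0.0;
}

/* Delay-aware check: refuse motion once the tool could cross the
   boundary before braking takes effect (here 50 * 0.02 = 1 mm). */
bool delay_aware_ok(double dist_to_boundary_mm) {
    return dist_to_boundary_mm > MAX_SPEED_MM_S * DELAY_S;
}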
My vote would be: (Score:1)
Let's stick these folks who believe in this verification technique underneath the robot as the first patients.
Just do some pointless surgery, open em up, and stitch them back up.
If they survive then trials on other people can start.