Medicine Security

Professor Kevin Fu Answers Your Questions About Medical Device Security

Almost a year ago you had a chance to ask Professor Kevin Fu about medical device security. A number of events (including the collapse of his house) conspired to delay the answers. Professor Fu has finally found respite from calamity, coincidentally at a time when the FDA has issued guidance on the security of medical devices. Below you'll find his answers to your old but not forgotten questions.
Fu: I apologize for the year-long delay, but my queue has rather overflowed after part of my house collapsed. See slide #11 for more information on the delay.

Medical device security is a challenging area because it covers a rather large set of disciplines including software engineering, clinical care, patient safety, electrical engineering, human factors, physiology, regulatory affairs, cryptography, etc. There are a lot of well-meaning security engineers who have not yet mastered the culture and principles of health care and medicine, and similarly there are a lot of well-meaning medical device manufacturers who have not yet mastered the culture and principles of information security and privacy. I started out as a gopher handing out authentication tokens for a paperless medical record system at a hospital in the early 1990s, but in the last decade I have focused my attention on the security of embedded devices with application to health and wellness.

I huddled with graduate students from my SPQR Lab at Michigan, and we wrote up the following responses to the great questions. We were not able to answer every question, but readers can find years' worth of in-depth technical papers on blog.secure-medicine.org and spqr.eecs.umich.edu/publications.php and thaw.org.



Cochlear Implants
by mcspoo

How secure are cochlear implants and their processors? Any chance I'm going to hear the voice of God (without the tooth implant, à la Real Genius)?

Fu: Classic cochlear implants are mostly analog circuits with some external supporting software. However, newer implants on the drawing board are looking at how to enable audiologists to adjust implant settings remotely from the cloud. There are, of course, some significant security and privacy issues that need to be resolved. But there are also good reasons for remote access. Namely, patients' bodies change over time, and today an audiologist must tune the implant settings manually. Remote control may simplify life for patients from a demographic that may have difficulty making office visits.
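
To make the remote-adjustment idea concrete, here is a minimal sketch of how a cloud-issued settings update could be authenticated before the external processor applies it. Everything here is an assumption for illustration: the key provisioning, the field names, and the freshness window are hypothetical, and no real implant protocol is being described.

```python
import hashlib
import hmac
import json
import time

# Hypothetical: a per-device secret shared between clinic and external processor.
DEVICE_KEY = b"per-device secret provisioned at the fitting appointment"

def sign_settings(settings: dict) -> dict:
    """Clinic side: bind the settings and a timestamp under an HMAC."""
    issued = int(time.time())
    payload = json.dumps({"settings": settings, "issued": issued},
                         sort_keys=True).encode()
    mac = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return {"settings": settings, "issued": issued, "mac": mac}

def verify_settings(msg: dict, max_age_s: int = 3600) -> bool:
    """Processor side: recompute the MAC and reject stale or forged updates."""
    payload = json.dumps({"settings": msg["settings"], "issued": msg["issued"]},
                         sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    fresh = int(time.time()) - msg["issued"] <= max_age_s  # limits replay
    return hmac.compare_digest(expected, msg["mac"]) and fresh

update = sign_settings({"channel_gain": [1.0, 0.9, 1.1], "max_loudness": 0.8})
assert verify_settings(update)
```

Note that the timestamp is included inside the authenticated payload; a design that checked freshness on an unauthenticated field would let an attacker replay old settings with a new timestamp.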

Cochlear implants are amazing little devices that enable profoundly deaf patients to partially recover hearing. See the cover of Biodesign: The Process of Innovating Medical Technologies by Zenios, Makower, and Yock. Also see Ultra Low Power Bioelectronics by Rahul Sarpeshkar. Cochlear implants consist of two major pieces: (1) an implant in the skull that directly stimulates the auditory nerve, and (2) a less resource-constrained external device worn on the scalp. The external device clips onto the scalp with a magnet to keep the implant paired. Think of the implant as special circuitry that wirelessly delivers sound as electrical impulses. Think of the external device as the source of power, sound inputs, and control.

I met a relatively young flight attendant a few years ago who had a cochlear implant. He explained that one day he suffered a routine cold that got worse and caused a rare infection that destroyed his auditory nerve. He lost his hearing. The cochlear implant sufficiently restored his hearing such that he and I could have a normal conversation.

You can imagine the complex security and privacy questions that will need to be considered when future devices go all "Internet of Things" or "TerraSwarm."



PCA Pumps?
by Digital Ebola

Have you explored changing the dosages on drug pumps? Either through exploiting the device directly or by exploiting the database backend? I reference the Hospira pumps that run Linux, allowing one to telnet to them as root with no password authentication. Hospira did issue an update to that but since pumps are so numerous, I'm sure that many hospitals have been slow to update. Thanks!

Fu: Pumps for medicine are amazing. Most people who have visited a hospital or seen a TV show should be aware of the plain old IV drip of saline solution that hydrates patients by gravity. It gets more interesting when a computer-controlled pump takes over from gravity. There are all sorts of pumps, ranging from bedside pumps to implantable pumps.

PCA is short for patient-controlled analgesia. I believe this question is referring to a bedside pump rather than an implant. For instance, a patient may receive a PCA pump to deliver controlled pain medication such as morphine. The typical user interface consists of a "more please" button that delivers a bolus of drug via an IV.
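
As a minimal sketch of the guardrails behind that "more please" button, consider the two classic constraints a PCA pump enforces: a lockout interval between boluses and a rolling per-hour cap. The class name and all the numeric parameters below are illustrative assumptions, not clinical values.

```python
class PCAPump:
    """Toy model of PCA dosing guardrails; all parameters are hypothetical."""

    def __init__(self, bolus_mg=1.0, lockout_s=600, hourly_cap_mg=4.0):
        self.bolus_mg = bolus_mg
        self.lockout_s = lockout_s
        self.hourly_cap_mg = hourly_cap_mg
        self.history = []  # (timestamp, dose) pairs actually delivered

    def request_bolus(self, now: float) -> bool:
        recent = [(t, d) for t, d in self.history if now - t < 3600]
        if recent and now - recent[-1][0] < self.lockout_s:
            return False  # denied: still inside the lockout window
        if sum(d for _, d in recent) + self.bolus_mg > self.hourly_cap_mg:
            return False  # denied: would exceed the rolling hourly cap
        self.history.append((now, self.bolus_mg))
        return True

pump = PCAPump()
assert pump.request_bolus(0.0) is True    # first press delivers
assert pump.request_bolus(30.0) is False  # second press too soon, denied
```

The security relevance is that this logic, not the button, is the safety boundary; anything that can rewrite these limits over the network (the telnet-as-root scenario in the question) bypasses it entirely.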

A number of researchers have analyzed the attack surfaces of insulin infusion pumps, a special kind of externally worn pump for diabetics. Several faculty did outstanding work in this space several years ago, and more recently a number of smart blackhat researchers have demonstrated the problems in ways more easily understandable by the general public. I think it's fair to say that manufacturers initially underestimated the importance of security requirements engineering during the early concept phases of product engineering. That said, the manufacturers are doing some amazing engineering. There is a game of catch-up, but I am optimistic that the manufacturers will improve by following the new U.S. FDA guidance on cybersecurity in good faith. Some manufacturers apparently have been thinking about security for a while. For instance, members of the insulin pump team at Medtronic were recently issued a medical device security patent filed way back in 2007!

Now on to the real question: what about the backdoor of the pump? No one likes to advertise the unsavory backdoors built into products, some by design and some by accident. It's out of sight, out of mind. On old CAT scanners, you'll sometimes even find an "lp" Unix account enabled without a password. I don't know about this particular pump, but I would not be surprised if there are some ports left open for debugging or for communication with online drug libraries. You will likely find some interesting traffic, perhaps not cryptographically protected, if you listen to the network. If you do find a problem, please be responsible and patient. Finding a vulnerability in a web browser is significantly different from finding a vulnerability in a medical device. The direct consequences for patients must be taken into account, and security researchers not collaborating with a physician are likely skating on thin ice. I recommend that researchers notify the FDA so that it may communicate the problem to the manufacturer. Call up the FDA people listed on the FDA cybersecurity guidance, or file a MedWatch 3500 report. It once took a year for the FDA to process one of my security reports; the agency is somewhat understaffed. The FDA has tens of thousands of employees, but only about two of them focus on security. So be patient. They are good people doing the best they can with their scarce resources. Remember, your U.S. readers elected the people who set the budget.
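
For readers with a bench-test device of their own, a sketch of the kind of check the question implies is below: see whether a device exposes a telnet service at all and grab its banner. The address is a TEST-NET placeholder, not a real device; run something like this only against equipment you own or are authorized to test, and report findings through the FDA/MedWatch channels above rather than publicly.

```python
import socket
from typing import Optional

def telnet_banner(host: str, port: int = 23, timeout: float = 3.0) -> Optional[str]:
    """Return whatever the service sends on connect, or None if unreachable."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.settimeout(timeout)
            return s.recv(256).decode(errors="replace")
    except OSError:
        return None  # closed, filtered, timed out, or unreachable

banner = telnet_banner("192.0.2.17")  # placeholder TEST-NET address
if banner is not None:
    print("telnet open; banner:", banner.strip())
else:
    print("no telnet service answered")
```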



Clinical Data Systems
by DeathGrippe

Most clinics, hospitals, insurance companies and dental offices are extensively computerized and networked. Based on your experience, how often are these systems compromised?

Fu: I find a good rule of thumb for measuring the security of a clinical environment: count the number of Windows XP boxes. Why? Because these devices are more vulnerable to run-of-the-mill, conventional malware. At one large hospital, medical devices based on Windows XP were re-infected about every 12 days if the box was not protected. With "bandaid" approaches like firewalls and anti-virus, the devices can last longer before re-infection. Alas, you can't make good wine out of bad grapes. Windows XP lacks meaningful security requirements. Microsoft learned its lessons and has improved its security requirements and approaches over the years. Microsoft ended all support for XP on April 8th of this year.
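
A quick back-of-envelope calculation shows why that 12-day figure matters at fleet scale. The fleet size below is a made-up example, not data from any hospital.

```python
# Implication of a ~12-day mean re-infection interval for unprotected boxes.
mean_interval_days = 12
fleet_size = 50  # hypothetical count of XP-based devices in one hospital

per_device_per_year = 365 / mean_interval_days       # ~30 infections per device
fleet_per_year = per_device_per_year * fleet_size    # ~1,500 fleet-wide

print(f"{per_device_per_year:.0f} infections per device per year")
print(f"{fleet_per_year:.0f} infections per year across a {fleet_size}-device fleet")
```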

That said, Linux ain't no picnic either. All operating systems have risks and benefits. I believe the root of the problem is that software security lifecycles for consumer grade operating systems do not align well with the product lifecycles of medical devices. Medical devices need to remain safe and effective for a very long time.



What can I do if I have one?
by AmiMoJo

Say I have an implant that could be hacked; what can I do to protect myself? Are any vendors more reputable than others when it comes to security? Is tinfoil effective? Should I demand that my doctor replace known vulnerable equipment?

Fu: I think patients can take comfort in knowing that the FDA has written meaningful guidance on cybersecurity that is likely a game changer for manufacturing. Also, I find that engineers at most medical device manufacturers sincerely want to improve the security of their products. This positive attitude is unlike what one finds in adversarial industries like electronic voting, where it's more common to see manufacturers deny risks rather than mitigate them. I've seen some large medical device manufacturers organize security teams composed of dozens of employees across engineering, sales, marketing, you name it, the whole company. They are beginning to understand that information security and privacy have to become part of the corporate culture if the products make use of modern communication and computer technology.

On the other hand, I don't think you'll ever find a hack-proof computer, whether it be a laptop, smart fridge, or medical device. I used to believe that a computer buried in concrete was secure, until I buried one in the concrete foundation of my house and powered it up wirelessly. You could also go to your car dealer and ask for a crash-proof car after you run into a tree; you might get funny looks. A manufacturer cannot eliminate risk, but it can be smart about minimizing risk. For instance, one of the best ways to minimize security risk is to have meaningful security requirements during the concept phase of device engineering. The requirements won't prevent all security problems, but a lack of security requirements will prevent the product from having meaningful security down the line. One can argue that it's a lot cheaper to engineer security in from the start than to retrofit it, but that argument is no longer necessary since the draft FDA guidance on cybersecurity is abundantly clear on expectations for security risk management during the manufacture of new devices.

If I were prescribed a medical device, I would accept it. Why? Anything with a computer is hackable by some adversary, so worrying about whether an implant can be hacked does not help answer the basic question: how to balance risk. If you are prescribed a medical device, then your doctor likely determined that you have a significant, predisposed risk. For instance, you might have a significant risk of sudden cardiac arrest. In general, you are much safer with the device than without it.



Re:Start-ups
by Anonymous Coward

How good are malware-detection software and the WattsUpDoc system at finding something potentially harmful on a device?

Fu: WattsUpDoc is a system that detects malware by analyzing patterns in power consumption measured at the power outlet. It's basically detecting the phase shift on the AC power line caused by the reactive power and varying loads of the connected computer. The details get hairy and are written for experts, so I'd refer you to the scientific paper. The beauty is that no software changes are required on the device being monitored (e.g., a medical device).

We published our report on WattsUpDoc at the USENIX HealthTech workshop. There is also a related paper on detecting web browser activity from the power lines. The performance surprised me: 95% accuracy for known malware, and 85% accuracy for previously unknown malware (unlabeled samples of a malware infection that were not in the training set). It works well because medical devices tend to do a small number of different things when working normally. We can detect the deviation.
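
To give a feel for the approach, here is a deliberately toy sketch of the underlying idea: summarize a power trace with a few coarse features learned from normal operation, then flag traces that deviate. The real WattsUpDoc feature set and classifier are different and far more careful; see the USENIX HealthTech paper for the actual pipeline.

```python
import numpy as np

def features(trace: np.ndarray) -> np.ndarray:
    """Coarse summary of a power trace (illustrative features only)."""
    spectrum = np.abs(np.fft.rfft(trace - trace.mean()))
    return np.array([
        trace.mean(),          # average draw
        trace.std(),           # load variability
        spectrum[1:50].sum(),  # low-frequency energy
    ])

def train_profile(normal_traces):
    """Learn per-feature mean and spread from known-clean operation."""
    feats = np.array([features(t) for t in normal_traces])
    return feats.mean(axis=0), feats.std(axis=0) + 1e-9

def is_anomalous(trace, profile, z_threshold=4.0) -> bool:
    mean, std = profile
    z = np.abs((features(trace) - mean) / std)
    return bool(z.max() > z_threshold)  # any feature far outside the norm

rng = np.random.default_rng(0)
normal = [np.sin(np.linspace(0, 60, 6000)) + 0.05 * rng.standard_normal(6000)
          for _ in range(20)]
profile = train_profile(normal)
infected = normal[0] + 0.5 * rng.standard_normal(6000)  # noisier draw
print(is_anomalous(infected, profile))  # True: the load pattern changed
```

The reason this works at all, as noted above, is that medical devices do a small number of different things when healthy, so "normal" is a tight target and deviations stand out.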



Should the local IT team have full control over a system
by Joe_Dragon

Should the local IT team have full control over any system in place? Should vendors be forced to let systems have AV and OS updates installed on them without delays?

Fu: Hi, Joe the Dragon. I shall call you Trogdor. This is a good question, but it is technically a leading question, because the computing systems created by medical device manufacturers force the IT team to choose between bad and worse. In a more ideal world, we wouldn't need to worry about viruses in the first place. So let me go on a tangent for a moment. Buffer overflows? Maybe that medical device should not be written in C. SQL injection error? Maybe you shouldn't be running a web server with an embedded database inside a life-critical medical device in the first place. The IT folks catch a lot of blame, ranging from breaches to clinician complaints about mucking up the clinical workflow. There's some truth to that, but realize that the IT folks are stuck with what they can buy or make.
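
The SQL injection point is worth one concrete illustration. The sketch below uses sqlite3 as a stand-in for whatever embedded database a device might ship; the table and data are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO patients VALUES (?, ?)",
                 [(1, "Alice"), (2, "Bob")])

user_input = "1 OR 1=1"  # attacker-controlled "id"

# Vulnerable: concatenating input lets the attacker rewrite the query.
rows = conn.execute("SELECT * FROM patients WHERE id = " + user_input).fetchall()
print(len(rows))  # 2: every row comes back, not just id 1

# Safer: a parameterized query treats the input strictly as a value.
rows = conn.execute("SELECT * FROM patients WHERE id = ?", (user_input,)).fetchall()
print(len(rows))  # 0: "1 OR 1=1" is not a valid id
```

Parameterization closes this particular hole, but the deeper point above stands: the surprising part is that a life-critical device is fielding SQL queries at all.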

OK, now your question: do you give IT the keys? I'm not gonna be tricked into answering that one. It depends. I think the most effective organizational structures are ones where the clinical safety teams and the IT security teams learn to speak each other's languages. The manufacturers need to be forthcoming about offering regular security updates for underlying third-party software if they make the business choice to use COTS software. Hey, COTS software is cheap for a reason. The best situation is when the leaders of these teams do not hesitate to call each other. That said, the most secure system might also be the most unsafe, and the safest system might be the least secure. There are cases where one might forgo security because a safety issue trumps it. What if you lock out access to a hypothetical pacemaker after three failed password attempts? Probably not a good idea, if you think about it for a moment. A secure system that cannot deliver care is neither safe nor effective. Striking the balance is tricky.
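
One way to make the pacemaker-lockout tension concrete: instead of a hard three-strikes lockout, throttle failed attempts with a capped delay and provide an audited "break-glass" path for emergency clinical access. This is a purely hypothetical sketch of the trade-off, not any real device's policy.

```python
import time

class AccessPolicy:
    """Toy policy: slow attackers down, never lock a clinician out."""

    MAX_DELAY_S = 8  # cap the backoff so access is always eventually possible

    def __init__(self):
        self.failures = 0
        self.audit_log = []

    def try_authenticate(self, credentials_ok: bool) -> bool:
        # Capped exponential backoff: 0s, 1s, 3s, 7s, then 8s forever.
        time.sleep(min(2 ** self.failures - 1, self.MAX_DELAY_S))
        if credentials_ok:
            self.failures = 0
            return True
        self.failures += 1
        return False

    def break_glass(self, clinician_id: str, reason: str) -> bool:
        # Emergency access is always granted, but loudly recorded for review.
        self.audit_log.append((time.time(), clinician_id, reason))
        return True
```

A policy like this trades some security (an attacker is merely slowed, not stopped) for the safety property the answer insists on: the device can always deliver care.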

I have a long rant on software updates (NSFW).



Safer Programming Language
by Anonymous Coward

The C programming language is most often used for embedded devices. The language is poorly specified. Compilers sometimes have issues, and programmers find a zillion creative ways to make mistakes. MISRA C and its enforcement is a bag of hurt in the absence of certified tools. Has there been any work to define a more safe/sane programming language for embedded devices?

Fu: Yes, but it's certainly hard to find in the medical device community. My colleagues in aviation software safety brag about their safer languages and practices, and I do think it's a good idea for the medical device community to borrow ideas from avionics. However, there are a couple of roadblocks.

First, there's a crapton of legacy software out there. Try this experiment: walk into the C suite (not the programming language, the corporate suite), then declare that you need to stop product development for 9 months in order to convert to architectures that have better security properties. I know of only one company that did this (hint, it's an automotive company).

Second, the universities are at fault. I once asked a senior engineer at a medical device manufacturer why they wrote their implantable medical device firmware in C and assembly. The engineer explained: that's who they can hire! The universities produce the graduates, and we are not training them sufficiently for trustworthy computing. When we teach students C and C++, we are handing them loaded weapons. Many of the students are talented and can respect the unchecked power of C and assembly, which is especially good for high-performance systems and hand-optimized inner-loop code. However, if we want to see improvements in the choice of programming languages, universities need to produce engineers who understand the risks of different programming languages. No one language is perfect for every situation. I highly recommend reading Prof. John Knight's book Fundamentals of Dependable Computing for Software Engineers to learn how to match the programming language to the risks.
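
As a small illustration of the "loaded weapon" point: unchecked fixed-width arithmetic silently wraps around. The sketch below simulates a 16-bit unsigned counter of the kind common in C firmware (the scenario and values are invented for illustration).

```python
def add_u16_unchecked(a: int, b: int) -> int:
    """Simulates unchecked C arithmetic: wraps silently at 16 bits."""
    return (a + b) & 0xFFFF

def add_u16_checked(a: int, b: int) -> int:
    """What a safer language or library would do: fail loudly."""
    total = a + b
    if total > 0xFFFF:
        raise OverflowError(f"u16 overflow: {a} + {b}")
    return total

print(add_u16_unchecked(65_530, 10))  # 4: the counter just went backward
try:
    add_u16_checked(65_530, 10)
except OverflowError as e:
    print(e)  # raises instead of silently corrupting state
```

In a spreadsheet this is a nuisance; in dosing or timing firmware, a counter that silently jumps from 65,530 to 4 is a hazard, which is exactly the risk-matching argument Knight's book makes.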



What to do when security is unfixable?
by Anonymous Coward

Seeing the abysmal state of computer security, and even of basic reliability expectations (which Dijkstra noted years ago), it's no surprise that embedded systems are no better. You usually don't see them, so you're less likely to notice just how poorly and insecurely the software is done. So how do we convince the people in the medical apparatus industry to leave well enough alone with the networking and wireless and bells and whistles, and simply deliver machinery that does what it does (keeps us alive), doesn't also surf the web for cat videos, and doesn't leave the door open for someone with the latest exploit kit? Why do these things have to be connected at all?

Fu: A couple of responses. A lot of medical devices are not networked in the sense of our home computers on the Internet; many are connected by sneakernet. Yet the malware still gets in. Sleep labs are notorious for malware because patients bring in USB sticks of music, plus unwanted bonus material. I know of one large medical device that was offline but got infected by Conficker during the split second that the vendor temporarily enabled the Internet connection to download a software update. Sad.
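
One cheap hygiene step the sneakernet problem suggests is hashing every file on removable media against a known-bad list before the stick touches a clinical machine. This is a minimal sketch under obvious assumptions: the hash set is a placeholder (a real deployment would use a maintained signature feed), the mount point is hypothetical, and hash matching only catches already-known samples.

```python
import hashlib
from pathlib import Path

# Placeholder entry; a real list would come from a malware-signature feed.
KNOWN_BAD_SHA256 = {"0" * 64}

def scan_removable(mount_point: str) -> list:
    """Return paths on the stick whose SHA-256 matches a known-bad hash."""
    root = Path(mount_point)
    if not root.exists():
        return []
    flagged = []
    for path in root.rglob("*"):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            if digest in KNOWN_BAD_SHA256:
                flagged.append(path)
    return flagged

bad = scan_removable("/media/usb0")  # hypothetical mount point
print("quarantine this stick" if bad else "no known-bad hashes found")
```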

Keep in mind that manufacturers create products because they think they can sell them. If consumers did not express interest in questionably secure products, then we'd see better security. If insurance rates were tied to cybersecurity hygiene, we'd see security economics at work. Unfortunately, security and privacy are out of sight and out of mind as you point out. For instance, hospitals often demand the bells and whistles. I witnessed one physician checking Gmail and the web on a medical records system during surgery. I didn't have a chance to explain the risks of drive-by downloads as he was occupied teaching a young resident how to catheterize the anesthetized patient. I know another hospital system where they let radiologists check email on the medical devices because staff wanted access to email, and there wasn't enough desk space for a second computer.

I have a set of slides on wireless in which I argue that wireless is like bacon: people think it makes everything taste better. Wireless communication and network connections do serve an important role, but one needs to make a case-by-case judgment for each device. I like the concept of using wireless to reduce infection rates during surgical implantation of defibrillators and pacemakers. About 1-2% of implantations result in major complications such as infection, and about 1% of those cases are fatal. Wireless does introduce security risks, and while the security architectures can be greatly improved, I'd rather be insecurely alive than securely dead from an infection.
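
The arithmetic behind that trade-off, using only the two figures quoted above, works out as follows; the 100,000-implant framing is just for readability.

```python
# Baseline fatality risk per implantation from the figures in the text:
# 1-2% of implantations have major complications; ~1% of those are fatal.
complication_rates = (0.01, 0.02)
fatality_given_complication = 0.01

for rate in complication_rates:
    per_implant = rate * fatality_given_complication
    print(f"{per_implant:.4%} fatality risk, "
          f"roughly {per_implant * 100_000:.0f} per 100,000 implants")
# 0.0100% (~10 per 100,000) to 0.0200% (~20 per 100,000)
```

That 10-to-20-per-100,000 baseline is the risk that wireless programming (fewer reopened incisions) aims to shrink, which is why the answer weighs it against the added security exposure rather than treating wireless as free.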



Medical device security vs. Open standards?
by Anonymous Coward

In the ever-increasing world of consumerized technology (apps, smartphones, smarter cars, etc.), how do you see medical device security staying relevant and cutting edge while maintaining adequate security? More and more people can and probably will ask "why can't I use [X] with my [Y]?" For instance, could a secure but open interface be created for insulin pumps that would allow an end-user app to aggregate multiple data sources into a better snapshot of that person, while still being secure and protected from hijacking by a third party?

Fu: I agree that the natives will get restless if they perceive security as a problem rather than a solution. However, consumers have become accustomed to crap in a hurry during the 1990s transition from postcards to hyperconnected electronic communication. I think it will be difficult to create magic walled gardens or magic interfaces that "add" security because security is not a product, it's a property and a process. I see three areas where one can improve the trustworthiness of medical device software: early concept phases, post market surveillance, and all the fun stuff between (design, implementation, testing, verification, validation, etc.). There's a significant security focus on the implementation and finding bugs, but by that time much of the fate is sealed by the requirements engineering. I think more time should be spent at the concept phase on hazard analysis, risk management, etc. so that implementations are less likely to have security problems. Then spend time on post-market surveillance so you can measure the shifting effectiveness of the security mechanisms as the threats evolve.

Today, the worries are mostly conventional malware slowing down medical devices or causing malfunctions. We've begun to see signs of nation state threats, and we should use our time carefully as threats rarely decrease in severity.

I'd encourage computer science students to work for a medical device manufacturer or FDA rather than the latest Silicon Valley startup. The problems will be interesting and will bring great personal satisfaction. For creative students who enjoy writing and open ended problem solving in health care, apply to graduate schools that carry out medical device security research! Best wishes.
Comments
  • wireless is like bacon - best analogy ever. mmm bacon....

  • I find a good rule of thumb to measure security of a clinical environment: count the number of Windows XP boxes.

    Plus, we can't write decent web apps because about 80% of the PCs in a typical hospital are still on IE8.

    • by ColdWetDog ( 752185 ) on Monday October 06, 2014 @01:06PM (#48074361)

      Why do people want to write medical applications for browsers? Because it's easy, and everybody 'understands' JavaScript (well, sort of). Personally, I think web browsers should be banned from medical areas until they grow up.

      Why? Well, our PACS (the program we use to view X-ray images) works on IE9; our EHR works (when it works at all) best on IE10 with the IE6 compatibility switch; our patient education web app works on IE9 and IE10 with IE8 compatibility. Browsers really limit the UI and waste white space. You have both the web app AND the browser to worry about for security and updates. And, what a surprise, updates to the browser trash the application.

      It's fine for Facebook and browsing your favorite adult website. For real work, it sucks.

      • by tomhath ( 637240 )

        I agree with everything you said. Unfortunately, desktop apps have all the same security and incompatibility issues. Plus it's a problem (not unsolvable, but an extra expense) keeping them deployed and up to date.

        What needs to be banned is any browser version more than one release back from current; but that's not going to happen either, because so many legacy apps (like yours) won't work. A classic Catch-22: customers can't update their browser because it will break all their apps, but vendors can't write modern apps because nobody has a browser that can run them.

        • A classic Catch 22, customers can't update their browser because it will break all their apps, but vendors can't write modern apps because nobody has a browser that can run them.

          May or may not be related to lock-in from other products, but at the medical library where I work, our proxy runs into problems with the hospital's computers, which use an ultra-old version of IE. It didn't crop up until the larger hospital network gobbled up so many clinics and hospitals that they had to get a new IP range and we weren't g

      • Browsers really limit the UI and waste white space.

        It's IE's fault that it doesn't have the capabilities of Chrome.

  • by OzPeter ( 195038 ) on Monday October 06, 2014 @12:45PM (#48074181)

    First, there's a crapton of legacy software out there.

    And there you have it. In this day and age we should be using "metric craptons", not the older imperial standard.

  • To put things into perspective, if I wanted to kill someone by mucking with their pacemaker, would it be easier to "hack" the pacemaker or just point a tight-beam lethal-to-the-pacemaker microwave at their chest for a few seconds?

    The same goes for other implants: If it's electronic or for that matter even just electrical I can probably disable it at a distance of several yards with a high-power narrow-beam microwave or other purpose-selected radio frequency burst that isn't powerful enough to kill or even

  • Most of the discussion targets devices that are randomly hijacked because they are built on garbage.

    AFAIK, no one has hacked a device to get it to harm a patient or to suck data/code from it.

    Some hackers are interested in the personal information in some medical records because it's worth something in the Medicare-fraud business, but that's more of a medical-records systems thing, where one would adopt financial-industry standards (not avionics).

    Security is poor in medical devices because there's less risk:
  • Glad to hear the medical device they're about to jam in my pee-hole has firmware written in C. No doubt it would be easy to overflow the numeric field for temperature, causing the device to spontaneously change temperature to 65536 degrees. I'm really looking forward to that.
  • "The Security Ledger reports that the U.S. Food and Drug Administration (FDA) has issued final guidance on Wednesday that calls on medical device manufacturers to consider cyber security risks as part of the design and development of devices."

    Just who in their right mind connects a medical device to a publicly accessible network?

"Protozoa are small, and bacteria are small, but viruses are smaller than the both put together."

Working...