
 




FDA Releases New Cybersecurity Guidelines For Medical Devices (theverge.com) 40

An anonymous reader quotes a report from The Verge: The U.S. Food and Drug Administration released its recommendations for how medical device manufacturers should maintain the security of internet-connected devices, even after they've entered hospitals, patient homes, or patient bodies. Unsecured devices can allow hackers to tamper with how much medication is delivered by the device -- with potentially deadly results. First issued in draft form last January, this guidance is more than a year in the making. The 30-page document (PDF) encourages manufacturers to monitor their medical devices and associated software for bugs, and patch any problems that occur. But the recommendations are not legally enforceable -- so they're largely without teeth. The FDA issued an earlier set of recommendations in October 2014 (PDF), which recommended ways for manufacturers to build cybersecurity protections into medical devices as they're being designed and developed. Today's guidance focuses on how to maintain medical device cybersecurity after devices have left the factory. The guidelines lay out steps for recognizing and addressing ongoing vulnerabilities. And they recommend that manufacturers join together in an Information Sharing and Analysis Organization (ISAO) to share details about security risks and responses as they occur. Most patches and updates intended to address security vulnerabilities will be considered routine enhancements, which means manufacturers don't have to alert the FDA every time they issue one. That is, unless someone dies or is seriously harmed because of a bug -- then the manufacturer needs to report it. Dangerous bugs identified before they harm or kill anyone won't have to be reported to the FDA as long as the manufacturer tells customers and device users about the bug within 30 days, fixes it within 60 days, and shares information about the vulnerability with an ISAO.
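The reporting rules in the summary reduce to a simple decision procedure. A minimal sketch in Python (the function and field names are illustrative, not taken from the FDA guidance):

```python
from dataclasses import dataclass

@dataclass
class Vulnerability:
    caused_serious_harm: bool      # death or serious injury already occurred
    days_to_customer_notice: int   # days until customers/users were told
    days_to_fix: int               # days until a fix shipped
    shared_with_isao: bool         # details shared with an ISAO

def must_report_to_fda(v: Vulnerability) -> bool:
    """Apply the reporting logic described in the guidance summary."""
    if v.caused_serious_harm:
        return True  # harm or death always triggers a report
    # Otherwise reporting is waived only if all three conditions hold:
    # 30-day disclosure, 60-day fix, and sharing with an ISAO.
    exempt = (v.days_to_customer_notice <= 30
              and v.days_to_fix <= 60
              and v.shared_with_isao)
    return not exempt
```

For example, a bug disclosed in 30 days and fixed in 60 with ISAO sharing needs no report, while the same bug fixed on day 90 does.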


Comments Filter:
    • by grumling ( 94709 ) on Wednesday December 28, 2016 @08:27AM (#53565141) Homepage

      "But we followed all the guidelines as set forth by the FDA, so we're not liable."

      -Every medical device manufacturer after they've been hacked.

      • by Anonymous Coward

        Following FDA guidelines does not shield you from negligence or errors in the design of a medical device.

        However, I do develop software for medical devices. I read through that document and found nothing useful in it.

        It seems to say:
        1) Here is the problem (those of us in the field already know this).
        2) You should evaluate these risks (duh!).
        3) Think of ways to fix the problem.
        4) Cybersecurity risk management should be part of the development process.

        All of these are fairly obvious. For those of us developing the

        • by MobyDisk ( 75490 )

          Following FDA guidelines does not shield you from negligence or errors in the design of a medical device.

          While it is not a perfect shield, following those guidelines is a pretty darned good one. This applies to many industries. Suppose a commercial boat sank and killed people. The first thing that will come up in court is coast-guard regulation. Did the captain have a captain's license? Was the boat recently coast-guard inspected? Did they have the proper number of passengers and coast-guard approved life jackets? If so, then the operator of the boat is probably not liable. Seriously: If they followed a

        • Fairly obvious it might be, but it's also fairly obvious that quite a lot of manufacturers simply chose to ignore the problem.

          And why?

          It costs money to think of the problem, it costs even more money to evaluate any risks, it costs still more money to think about fixing them, and it's downright expensive to actually implement any fixes.

          Incorporating cyber-security risk management will be part of the development process as soon as people are willing to pay for it. Which they aren't, because they can't s

    • by mean pun ( 717227 ) on Wednesday December 28, 2016 @09:26AM (#53565331)
      A decent government tries to keep its populace safe, and defining safety guidelines helps with this. Please try to appreciate the remnants of decent government that still survive rather than score cheap points by playing the cynic. The flocks of horror clowns that, for example, the people in the US prefer to elect nowadays at all levels of their government will try to demolish these remnants soon enough.
  • by Joe_Dragon ( 2206452 ) on Wednesday December 28, 2016 @08:25AM (#53565137)

    What about making OS updates happen? And letting the local IT staff lock down their network, instead of being forced to leave some things wide open to the outside vendor?

    • Dangerous bugs identified before they harm or kill anyone won't have to be reported to the FDA as long as the manufacturer tells customers and device users about the bug within 30 days, fixes it within 60 days, and shares information about the vulnerability with an ISAO.

      What about the Windows family of OSes?

  • But the recommendations are not legally enforceable -- so they're largely without teeth

    My authoritah !!

  • Not enough (Score:5, Interesting)

    by NotInHere ( 3654617 ) on Wednesday December 28, 2016 @08:38AM (#53565181)

    It's not enough to just be reactive about computer security. This still means that sophisticated attackers can hoard security vulnerabilities and develop advanced tools that find vulnerabilities the moment they are introduced. Instead, you should design the system from the start in a way that frustrates attacks and hopefully prevents some attacks entirely. A good talk about this:

    https://www.youtube.com/watch?... [youtube.com]

    Slides:

    http://outflux.net/slides/2016... [outflux.net]

    • A noble effort, but tools like ASLR have in practice shown themselves to be less than bulletproof. They're worth pursuing to an extent, but they aren't enough for a truly robust preemptive security strategy and shouldn't be our primary focus.

      The future of secure computing is obviously a ground-up microkernel affair [genode.org] that allows strong sandboxing. But that's going to take... a while to mature, in terms of native application support. In the meantime, a stopgap using paravirtualization and hardware assisted
  • . . . but an ongoing process, and this does not appear to recognize that. Especially with medical device certification: make a change, and generally the entire system must be re-certified.

    So if a flaw is found in $libfoo, and fixed, FIRST the med companies need to know about it (not sure they even LOOK. . . ), and THEN update. And yet all too many Medical systems I've seen are still running Windows XP. . . How do you remediate something that's out of support ???

    • Re: (Score:2, Informative)

      by Anonymous Coward

      And yet all too many Medical systems I've seen are still running Windows XP. . . How do you remediate something that's out of support ???

      Two different ways.
      The first one is to go through certification with the entire operating system. If it has been proven safe, then you don't really need to do further updates.
      The other way is hardware separation between the critical and non-critical parts.
      Most likely the life-critical function was done with a separate controller that either doesn't run an operating system at all or runs an operating system that has been certified for medical use. The Windows application you saw is just the GUI

      • In the end, the data from one part needs to be interchanged. No matter how unhackable the one part is, it can still display false data in the GUI that causes the patient to die, because the instruments say he's fine when he's dying.
    • They've progressed on that front: fix a flaw and you don't need recertification or even to alert the FDA.

      I'm more fond of a single-point-of-failure approach. To access a device, I'd want to connect to it in open mode first, and establish a two-way trust. The device would generate and install its own security key pair, and the device with which I connect would also generate a key pair. Then I could sign a new certificate with my device and use that as an authorized certificate: we just exchanged our
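The pairing scheme described above is essentially trust-on-first-use. A simplified stdlib-only sketch (real devices would generate asymmetric key pairs and sign certificates; here public keys are simulated with random tokens and authorized by fingerprint, so this is a toy model, not a secure implementation):

```python
import hashlib
import secrets

class Device:
    """Toy model of trust-on-first-use pairing between two devices."""

    def __init__(self, name: str):
        self.name = name
        self.public_key = secrets.token_bytes(32)  # stand-in for a real public key
        self.trusted: set[str] = set()             # fingerprints authorized in open mode

    @staticmethod
    def fingerprint(key: bytes) -> str:
        return hashlib.sha256(key).hexdigest()

    def pair(self, other: "Device") -> None:
        """Open-mode pairing: both sides record each other's key fingerprint."""
        self.trusted.add(self.fingerprint(other.public_key))
        other.trusted.add(self.fingerprint(self.public_key))

    def accepts(self, key: bytes) -> bool:
        """Later connections succeed only for previously paired keys."""
        return self.fingerprint(key) in self.trusted

pump = Device("pump")
tablet = Device("tablet")
pump.pair(tablet)   # done once, physically, in open mode
```

After pairing, `pump.accepts(tablet.public_key)` is true while an unpaired key is rejected, which is the two-way trust the parent comment describes.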

      • by Doke ( 23992 )

        The FDA may not require recertification after a supposedly minor software update. However, most companies would still want to go through QA again after a software change. This is because they're worried about medical liability. That's going to take time and money. It also creates a moving target for their developers. So they have an incentive to avoid OS upgrades of any kind.

        Since they're worried about liability, they will do anything they can to disclaim responsibility. Local hospital staff altering

        • QA doesn't take additional time and money; QA is an integrated part of the project management and, afterwards, the ongoing maintenance process. It's a critical component of the product development lifecycle required to minimize the time and money taken to produce and support a product.

          QA reduces costly support calls, costly lawsuits, and costly rework. Without QA, you send out products that eventually need bugfixes, and then have to be reworked. Units on shelves are now inadequate, and must be recalled

  • by nefus ( 952656 ) on Wednesday December 28, 2016 @09:21AM (#53565319) Homepage
    Is this the same federal government that still hasn't released full security guidelines for HIPAA? And how much older is the HIPAA issue?
  • by ArhcAngel ( 247594 ) on Wednesday December 28, 2016 @09:22AM (#53565321)
    BlackBerry may have conceded the mobile handset business but they own the automotive market. (Both Android Auto and CarPlay run as plug-in modules for BlackBerry's QNX Car OS) And for the last four years they have been quietly buying up medical electronic companies left and right and integrating them into QNX. [qnx.com]
  • No Teeth, Eh? (Score:2, Insightful)

    by Anonymous Coward

    But the recommendations are not legally enforceable

    Sure they are. Once a manufacturer is sued for negligence, the plaintiff can point out any failures to follow the FDA recommendations. It is a "soft" argument.

  • The FDA's guidelines are not enforceable. So the hospitals will care a lot more about anything in the manufacturers' EULAs that lets them disclaim liability.

    A lot of medical equipment, especially large scanners, is controlled by a PC. The embedded OS may be something real-time, but the control PC almost always runs Windows. It handles higher-level functions like visualization, printing, etc. Since these devices are large capital investments, they're kept for many years. Most doctor's offices don't have the

  • by MobyDisk ( 75490 ) on Wednesday December 28, 2016 @12:54PM (#53566275) Homepage

    I see several posts that amount to "These guidelines are too vague." Understand how regulations work, and that they are a trade-off: governments are not going to give a specific detailed list of how to address, for example, security concerns in software and hardware. They will not say "All passwords must be length X" or "Passwords must be stored using PBKDF2 hashes with N number of rounds" or "All secure access doors must use a lock with at least 15 tumblers in it." While such guidelines would make liability a lot simpler, they would need to constantly change and no one could ever agree to them.

    Instead, the regulations say things like "Use industry-standard best practices when securing patient data." Now that sounds completely obvious to any professional, but understand that security breaches happen most often because these kinds of obvious things are missed!

    They then expect the industry to take those high-level requirements and use a bit of self-regulation to determine what exactly the regulations mean. And that naturally changes over time, which is the intent. For example, the industry has decided that this means web sites should follow the OWASP top 10. They seem to have coalesced on using only FIPS-compliant algorithms. And there is agreement that protected health information (PHI) must be encrypted "in transit and at rest." But take that last one for example: does that mean that PHI must be encrypted when transiting across the internet, or across PCs within a LAN, or within devices in an embedded machine, or across chips on a board? That might depend. As technology changes, so will the expectations for what is secure. And without a bunch of congress-people having to argue over it and pass a new law.

    Ultimately, companies that want to be in compliance must prove that they thought about these things and made sensible decisions. Yeah, it sucks that this is vague, but there really is no perfect regulatory solution here. That work is delegated to other agencies and the industry itself.

  • After making sure that the recommendations are useful, the regulators need to "suggest" that insurance companies treat not following the guidelines as grounds for denying a claim (and depending on the scope, lawyers love the words "Gross Negligence").

    So you don't include decent crypto in your CLOUD ENABLED drug pump, or you have a backdoor in your Social Net Enabled Heart (with a matching lack of safe default settings)?

    NO CLAIM FOR YOU, and hope that the next of kin does not h

