Why Aren't There Better Cybersecurity Regulations For Medical Devices? (vice.com) 99
citadrianne writes with an excerpt from Motherboard about some of the factors behind the long-decried security problems that surround medical hardware, and that will only become more pressing as some long-term treatments become both more portable (in the form of drug pumps, muscle stimulators, etc), more connected to sensors and controllers, and more dependent on software. There is a growing body of research that shows just how defenseless many critical medical devices are to cyberattack. Research over the last couple of years has revealed that hundreds of medical devices use hard-coded passwords. Other devices use default admin passwords, then warn hospitals in the documentation not to change them. A big part of the problem is there are no regulations requiring medical devices to meet minimum cybersecurity standards before going to market. The FDA has issued formal guidelines, but these guidelines "do not establish legally enforceable responsibilities." "In theory you could sell a bunch of medical devices without ever having gone through a security review," the well-known independent medical device security researcher Billy Rios told Motherboard.
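The hard-coded-password problem the summary describes is easy to illustrate: a credential baked into firmware can be recovered by anyone who obtains a copy of the image, no exploit required. Below is a minimal sketch of string-scanning a binary blob for credential-looking text; the firmware bytes, field names, and heuristic are all invented for illustration.

```python
import re

def find_credential_strings(firmware: bytes, min_len: int = 6):
    """Extract printable ASCII runs from a firmware image and flag
    ones that look like credentials (purely illustrative heuristic)."""
    strings = re.findall(rb"[ -~]{%d,}" % min_len, firmware)
    hints = (b"pass", b"pwd", b"admin", b"login")
    return [s.decode() for s in strings if any(h in s.lower() for h in hints)]

# Hypothetical firmware blob with a baked-in credential:
blob = b"\x00\x01MODEL-X\x00admin_password=1234abcd\x00\x7f\x02telemetry\x00"
print(find_credential_strings(blob))  # ['admin_password=1234abcd']
```

This is essentially what the `strings` utility does; the point is that a hard-coded password offers no secrecy once the firmware leaves the factory.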
Re: (Score:2)
If that's an ethernet network it might still be more secure than encrypted wireless... You'd at least have to physically be in the hospital at some point to exploit it.
In any case, it's kinda hard to imagine explaining how someone died because the doctor forgot their password.
Re:There is no security in health care. (Score:4, Interesting)
What this article is talking about is the vulnerability of BMDI devices, devices that stream data to the EMR or receive data from it. These would include bedside monitors, the pumps used to give infusions, anesthesia carts, etc. It's very important that the data be accurate and not be monkeyed with, obviously.
But if a hospital IT department, which is under-resourced because of the declining reimbursement structure in healthcare (every year being asked to treat more and more people on less and less funding, and keep facilities up to date, and keep equipment modern and safe, and keep up with all the regulatory changes), decides to make all the device keys "1234", that's not really the architecture's fault.
There are safe practices in place, which are of course to verify the pump's settings before you turn it on, or to make sure the vitals in the record match what you're seeing on the monitor, etc. But there are security vulnerabilities, rooted in human tendencies, that even encryption won't solve.
Re: (Score:2)
Some of those vulnerabilities that would require physical access to exploit would be just as protected by the existing hospital security measures as current vulnerabilities (unplugging, pin-pricks, blocking lines, et cetera). Doesn't mean that they're not real, or that in some cases it wouldn't be frighteningly easy to do Very Bad Things, but we probably shouldn't treat the networked versions any more or less gingerly than the physical ones.
Re: (Score:3)
Most of these devices are either wireless or moving to wireless. Some of them must remain physically connected because an outage could result in patient harm, but more or less everything is moving to wireless for a variety of reasons.
-In many areas, cabling can't be left on the floor or hanging, and the device must be able to move around.
-Some devices travel all over the campus and may be used in an area where wired networking isn't available or practical
-Most PCs being used on
Re: (Score:2)
Wireless is a nightmare in many hospitals. Lead in the walls near the MRI and radiotherapy machines, for example.
Re: (Score:2)
What this article is talking about is the vulnerability of BMDI devices, devices that stream data to the EMR or receive data from it.
But if a hospital IT department, which is under resourced because of the declining reimbursement structure in healthcare...
Well, it probably wouldn't be an issue with Hospital IT, but Clinical Engineering. Clinical Engineering deals with the items that touch patients and send data to the EMR, and they may or may not even use the network provided by Hospital IT. Not that the issues with funding aren't still there, if not even more so. IME, it is rarely them that make device keys or passwords "1234" but rather the vendors or users. Often such "features" as backdoors and hardcoded admin passwords aren't even listed in the documentati
Re: (Score:2)
this reminds me of a story
years ago i was working at a very large, very prestigious hospital in boston. at the time they had no guest wifi. i needed a network connection so i set my laptop on the nurses' workstation and handed her one end of a long network cable and asked her to unplug the printer and plug in my wire. which she promptly did.
i was not in a lab connect, i was in a suit. i didn't know this nurse and she had no idea who i was. she simply removed one cable and plugged in mine.
needless to s
Re: (Score:2)
that should have said "lab coat" not "lab connect"....sorry
Dumb ethernet ports are dumb when security needed (Score:2)
i was not in a lab connect, i was in a suit. i didn't know this nurse and she had no idea who i was. she simply removed one cable and plugged in mine.
Shouldn't have mattered if she did. In my wife's office if you plug in an unknown machine to an ethernet port it simply won't work. The MAC address and some other stuff has to be registered to that particular port before it can connect. If I brought my laptop into her office, it would require non-trivial amounts of hacking to get it to connect to anything. While no security is bulletproof, lots of places don't even take basic precautions.
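The port-registration scheme described above amounts to a per-port allowlist of MAC addresses, with frames from unregistered sources dropped. Here is a toy model of that logic; the port names and MAC addresses are hypothetical and this is not any vendor's actual API.

```python
# Hypothetical per-port MAC allowlist, mimicking switch "port security".
PORT_ALLOWLIST = {
    "gi0/1": {"00:1a:2b:3c:4d:5e"},  # nurses' station printer
    "gi0/2": {"00:1a:2b:3c:4d:5f"},  # EMR workstation
}

def frame_permitted(port: str, src_mac: str) -> bool:
    """Allow a frame only if its source MAC is registered on that port."""
    return src_mac.lower() in PORT_ALLOWLIST.get(port, set())

print(frame_permitted("gi0/1", "00:1A:2B:3C:4D:5E"))  # True: registered printer
print(frame_permitted("gi0/1", "aa:bb:cc:dd:ee:ff"))  # False: visitor's laptop
```

Real switches implement this as "port security" or 802.1X, and MACs can of course be spoofed once known, but even this basic check would have stopped the plug-in-a-laptop scenario above.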
Re: (Score:2)
After a HIPAA violation, they will pay for it.
There are regulations in place, there just isn't enforcement until after an incident has happened.
A flat open network, as long as it is closed off to the rest of the world, is good enough. But when you start bringing in outside connections from vendors and other areas, or using the same network for the public Wi-Fi, then you may get into trouble.
Re: (Score:2)
After a HIPAA violation, they will pay for it. There are regulations in place, there just isn't enforcement until after an incident has happened.
A flat open network, as long as it is closed off to the rest of the world, is good enough.
Maybe. Maybe not. HIPAA is not a prescriptive standard. The operators of that network would have to have documented that they effectively assessed the risk of such a design, and then took "reasonable" measures to mitigate any significant risk. If they failed to do even that much (and that is still very common) they will be found to be in "willful neglect" and subject to even higher penalties.
Premium services (Score:1)
Big companies who do medical records (e.g. Microsoft, Google) care about security. The average company doing medical records cares about having a marketing buzzword that makes purchasers and patients feel secure. Hospitals generally don't give enough of a fuck because they don't understand it and it costs money, and it doesn't really cost them anything if they get broken into. It's not like many people will choose a different hospital or doctor.
Re: (Score:1)
Haha no.
Big companies who do medical records include EPIC, not Microsoft or Google.
Microsoft and Google are big tech companies who do medical records and care about security.
Because no one gives a shit about security (Score:2)
Even Dick Cheney had to have special consideration taken for his pacemaker, since the technology is so bad [popsci.com].
It isn't just device makers. In general most don't give a shit about security. From banking "apps" to healthcare "apps" - security is generally the last checkbox checked b
Re: (Score:2)
The problem is the age of most equipment.
Most medical equipment talks HL7 v2, sent via a standard unencrypted port; with more medical equipment it is easier to set up another port than to parse messages by their source.
It isn't so much not caring as age: most of this stuff is so old that you need to keep backwards compatibility. For the most part it was designed for serial-port communication, with TCP/IP hacked on once TCP/IP was no longer considered a passing fad.
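The HL7 v2 messages mentioned above are pipe-delimited text segments, typically shipped over a raw TCP port with no encryption, which is why anyone on the wire can read or forge them. A minimal parser sketch follows; the sample message is hypothetical and real traffic uses carriage-return segment separators plus MLLP framing, simplified here to newlines.

```python
def parse_hl7v2(message: str) -> dict:
    """Split an HL7 v2 message into segments keyed by segment ID.
    Real messages use carriage-return separators and MLLP framing."""
    segments = {}
    for line in message.strip().split("\n"):
        fields = line.split("|")
        segments.setdefault(fields[0], []).append(fields[1:])
    return segments

# Hypothetical ADT message: header segment plus patient identification.
msg = ("MSH|^~\\&|PUMP01|ICU|EMR|HOSP|202401011200||ADT^A01|0001|P|2.3\n"
       "PID|1||12345^^^HOSP||DOE^JANE")
parsed = parse_hl7v2(msg)
print(parsed["PID"][0][4])  # DOE^JANE -- patient name, sent in the clear
```

Nothing in the format itself authenticates the sender, so security has to come from the transport or the network around it.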
Re: (Score:2)
I think the major problem is that most software developers don't have a good enough grasp of security concerns. If the individual developers aren't thinking about security when implementing actual code, then it's hard to actually get secure systems. You can't just make a policy of "write secure code" if the developers don't have a clue how to do that. This is similar to making code easy to maintain, or making code that doesn't repeat. It takes a high level coder with years of experience before they figure
Not quite (Score:2)
I believe you are putting the cart before the horse. Nobody gives a shit about security in medical devices because it's not profitable to do so. If there was money to be had, you can bet your ass you would have FUD commercials running 24/7 and companies offering lifetime protection for just about everything.
People did not care too much about what us techie people said in regards to their digital security. We don't own enough media to be heard. But, WHOLLY BUCKETS OF CASH BATMAN! INFOMERCIAL! has people
The FDA is making this part of its clearance proce (Score:1)
I have a software application that was cleared by the FDA under the 510(k) class 2 classification. I actually had to submit cybersecurity documentation. The FDA is now doing it, but all the legacy applications will not have this in place.
Re: (Score:2)
Sometimes you just can't tell whether or not something is parody.
Mod parent up. (Score:2)
John McAfee for president! ;)
Re: (Score:1)
Regulations. Security, quality, reliability, good engineering.... All cost something.
It all comes down to blood and money. (Score:1)
Sadly any answer probably boils down to the fact that not enough people have been injured and/or died yet. Hang a few bodies around the problem and you can bet the government will start taking security on these devices much more seriously. Hang a few lawsuits on them and the companies might do something about it themselves.
Re: (Score:2)
Most medical devices need very simple, very structured data exchange. Oddly, much like household IoT. In both cases, making a fairly generic interface device makes a lot of sense. We do this now for a lot of large industrial devices; harvesters, for example, have one of a slew of interfaces, and companies make boxes to gather up data and relay instructions via cell phone, wifi, etc.
Sure, it's not perfect security; it's the hard-candy-outside approach. For what amounts to an embedded machine i
Its a trade-off (Score:2)
Devices should be secure, or at least securable. As should internal hospital networks.
At the same time the risk from bio-medical network hacking remains theoretical. There's a small but serious risk that harm could spread on a wide scale, but so far no exploits have been made.
The risk of network issues during critical, potentially confusing, seconds-count scenarios is also real. Having some kind of network incompatibility or security interface issue could easily mean the difference between life and death.
incentives (Score:1)
Re: (Score:2)
The FDA has no incentives to get regulations right. If something goes right, the FDA is not rewarded. If something goes wrong, the FDA is not liable.
Re: incentives (Score:1)
Big Medical Devices is very comfortable with regulation. Their Regulatory Affairs staffers are on a first name basis with the FDA staffers. And the high regulatory threshold keeps out upstarts. BMD can use 510k equivalency to get their next-Gen product approved at low cost. While owning the patents that keep upstart competitors from using the same approval process. The startups have to go through the whole clinical trials process.
Re: (Score:2)
aka regulatory capture.
Two possible reasons: (Score:2)
I figure there are two possible reasons for this:
1) The regulators are lazy/incompetent and haven't bothered.
2) The lobbyists for the medical devices industry have asked for it to keep profits higher.
But that there is little or no security in these things should be far more widely reported than it apparently is. Consumer electronics have really bad security; medical devices can't even be said to have security in a lot of cases.
Given what I've heard about the security and frequency of malware on hospital ne
Re: (Score:2)
I think that at some level, we just have to trust that most people aren't psychotic. There are a lot of vulnerabilities we all live with on a daily basis. Most people don't walk down the street with armor, even though it would technically be quite easy for someone to come along and stab them with a knife. We just assume that people won't do that. The brakes on most cars could easily be mechanically disabled, but we don't go to any lengths to stop people from cutting the brake lines. What is it about c
Re: (Score:2)
Well, ignoring the specific definition of 'psychotic' here (which isn't how you're using it) ... the problem with comparing this to your car is there's a significantly higher level of people doing malicious things on the intertubes just for the hell of it.
So, yes, people aren't likely to go around cutting brake lines on cars just for amusement's sake. But from a network security perspective? I've found assuming the intern
Re: (Score:2)
So perhaps the solution isn't to require device manufacturers to make them more secure. You can guarantee that they won't do it, or will mess something up along the way. Instead, why wouldn't the hospital put all the monitoring and other patient connected equipment on a separate network which isn't accessible from the outside because it isn't physically connected. For personal devices like pace makers and insulin pumps it might be less convenient to require things to be plugged in, but it would be a lot mo
CIA triad...in a different order (Score:4, Insightful)
If you work for a typical paper-pushing corporation, the priority on the "CIA triad" (confidentiality, integrity and availability) is usually C, then A, then I. If you work for a utility ("ICS"), it's often A, then I, then C. And if you work with medical devices, it's definitely I, then A, and maybe, way down the line, C, because there's the HIPAA legal hammer to take care of all that. Hardly anyone in this stack understands authentication, but the key with at least the last two is that if someone is trying to use a machine or device and they are standing right next to it, they are assumed to be authorized. Unfortunately, that line of thinking leaks out into web interfaces, telnet and other craziness, and that's why it's all a mess at the moment.
No governing will... (Score:1)
Because that would require regulation, and the GOP will not pass new regulations for fear of looking like 'big government' and giving their tea party opponents fuel to get them replaced in office with more 'conservative' people.
FDA Exists for the Corporations (Score:2)
... not for you - did your seventh-grade government school teacher perhaps try to tell you otherwise? Try to deal with empirical reality, not platitudes.
The entrenched interests that give high-paying jobs to former regulators are delighted that startups can't compete and that the products only have to be safe on paper, not subject to real competitive review (notice that Consumer Reports doesn't compare replacement needs - Consumers' Union does lobbying instead, unlike cars).
Gosh, back when I was doing medi
Be careful what you ask for (Score:5, Interesting)
I am a physician. While I don't implant pacemakers or defibrillators, I do take care of a number of patients who have these devices.
One critical issue here is accessibility of these devices. Suppose someone gets an implantable cardiac defibrillator for a failing heart. If the patient's cardiac status worsens, the device may activate and keep the heart beating. In these circumstances, it's critical that the physicians at the hospital have immediate and unrestricted access to the data on the device. Without this data, the physicians are at a serious disadvantage in trying to keep the patient alive.
To further complicate things, a patient in the midst of a cardiac event may not be able to provide a password. Even if the password is stored somewhere in the medical records, it is often cumbersome to find such data in modern electronic record systems. For example, if the device was implanted at a different hospital, the records typically have to be printed, faxed and then scanned in order to access the data. Those ridiculous steps translate into delays in care.
The real conundrum is whether a particular security modality is going to save more lives by thwarting hackers than it will cost by delaying medical treatment.
Re: (Score:2)
If availability trumps all else, and it is availability that makes medical devices easier to hack, then why couldn't the hacker simply hack the device and take it offline at the time it's most needed? If every second matters, couldn't the hacker delay you for a few seconds?
Sure they could. But that's not how threat modelling works. The question here is whether hackers will actually do that, and whether the added security to thwart them would lead to more deaths from doctors not being able to navigate that security in time to save the patient.
Risk isn't just the potential outcome of a certain situation, it's also the probability that that outcome will come to be.
It seems like what you are really saying is you need ready access to medical devices, but instead of building robust yet transparent security, your strategy is to "hope" a hacker never targets a patient of yours?
It's not a question of "building"; we don't know how to build a secure system like that (and I say that as a security researcher).
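The point about weighing probability as well as impact can be made concrete with a back-of-the-envelope expected-harm comparison. Every number below is invented purely for illustration; the structure, not the figures, is what matters.

```python
def expected_deaths(p_event: float, events_per_year: float, p_fatal: float) -> float:
    """Expected annual deaths = chance the scenario occurs per use
    * number of uses per year * chance an occurrence proves fatal."""
    return p_event * events_per_year * p_fatal

# Invented figures: locked-down devices delay care far more often than
# hackers target patients, so added friction can dominate total risk.
deaths_from_hacks = expected_deaths(p_event=1e-7, events_per_year=1e6, p_fatal=0.5)
deaths_from_delays = expected_deaths(p_event=1e-4, events_per_year=1e6, p_fatal=0.01)
print(deaths_from_hacks, deaths_from_delays)  # 0.05 vs 1.0
```

Under these made-up assumptions, the authentication friction kills twenty times more patients than it saves, which is exactly the trade-off the physician above is describing.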
Repeating Pattern (Score:2)
I don't know why, but security has been a problem every time a new class of device gains connectivity.
Robert Morris' internet worm got loose in 1988 - 27 years ago... WTF?
how do you make money from the hacks? (Score:1)
Re: (Score:2)
Pivots.
http://www.securityweek.com/me... [securityweek.com]
Doctors, Nurse, Technicians (Score:1)
I worked in security in the health care system for a short time and there was a ton of resistance to any security solutions we tried to implement. Some of it was that the medical staff felt it was impeding their ability to do their jobs, but it mostly seemed like they didn't like change.
Re: (Score:2)
I'm an anesthesiologist. I need IV pumps to work now. Not five minutes from now, but NOW. Could you make them more secure? Sure, you could require some kind of patient/drug/pharmacist-verification code, but I don't have the luxury of waiting for that to happen, because the patient needs it NOW. Nurses do the same thing on a slightly slower schedule. Go watch someone actively trying to die and a medical team trying to prevent that (a "code") and tell me your solutions.
FFS, I had the state
Re: (Score:2)
The right answer is to stop connecting important medical devices like IV pumps to insecure networks. If someone actually has to be standing next to the device in order to hack it, the risk of hacks goes way down.
what's the goal? (Score:2)
What's the goal of medical device software?
Currently, you have to prove that your target user can actually use your product without making mistakes. Make things too complicated in any way, and you're required to have a specialist on hand to turn the thing on. You don't decide what "too complicated" is, the FDA does.
The current solutions for maximum usability (hard coded passwords, no changing of passwords) are likely the result of existing regulation, not laziness on the part of medical device makers.
Medic
Cybersecurity is an oxymoron! (Score:1)
Shit's too easy to spoof.... well, maybe if you eliminate all inputs..
REGULATIONS??? (Score:2)
Anyone remotely familiar with the giant pile of manure known as HIPAA knows that government regulations in IT are not only ineffective but also a total waste of time and money.
Because electronic data are inherently unsecurable (Score:2)
At its very best, you have unhackable encryption for e-data. Now I will show you that that data can still be hacked.
At some point, some human has to take some action to access the unencrypted form of that data. If that human can do it, then it can be done by another human, some other unauthorized way. That's called hacking.
There is no way around this. The problem with e-records is that you don't have to be physically present to steal them; they can be copied and they can be transported, and the original source is non
One Word (Score:2)
Lobbyists
Security for devices (Score:2)
Not everyone writing software is, nor should they need to be, a security expert.
I think the proper method here is to not trust devices to be secure, ever. Instead look to a provider of security software and/or hardware to put your devices behind.
A firewall device in front of every connected device would seem to be the best approach.
Just like every computer should have a firewall, every device should too.
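The dedicated-firewall idea above mostly reduces to a default-deny allowlist on source address and destination port. Here is a toy model of that filtering logic; the addresses, port numbers, and rule format are hypothetical, and this is a sketch of the policy, not a production packet filter.

```python
# Hypothetical rules for a firewall box sitting in front of an IV pump:
# only the EMR integration engine may reach the pump, on its data port.
ALLOW_RULES = [
    ("10.0.5.10", 2575),  # EMR server -> pump's HL7 listener
]

def packet_allowed(src_ip: str, dst_port: int) -> bool:
    """Default-deny: a packet passes only if it matches an allow rule."""
    return (src_ip, dst_port) in ALLOW_RULES

print(packet_allowed("10.0.5.10", 2575))  # True: EMR traffic passes
print(packet_allowed("10.0.99.23", 23))   # False: stray telnet is dropped
```

The appeal of this approach is that the policy lives in one auditable place in front of the device, rather than depending on every vendor getting security right.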
Monetary incentive? (Score:2)
I'd say that security is weak because it would be difficult to profit from hacking medical devices. Regulation is weak because there have been no headline-grabbing incidents to bring the issue to the attention of regulators.
It would take a particular type of psycho to hack medical devices and harm people simply for the sake of harming people. That's probably what it will take before manufacturers improve security or government passes some knee-jerk regulations however.
It's complicated (Score:2)
As an area that I am very close to, I decided to sum up my comments in a single post rather than scatter replies to many of the uninformed, hyperbolic statements already made on this issue.
The FDA is not lazy or incompetent on this topic. I have personally worked with the people there who are driving this topic. There is a guidance document that was put through the draft/final review cycle on a fast track for FDA work (about 15 months between the two phases, which often takes 2-4 years).
http://www.fda.gov [fda.gov]
It's scary how relaxed security is on med devices (Score:1)
In my last job I worked on the development of a medical diagnostic instrument. While not immediately life-threatening if compromised, lots of patient details could be stored on the system with no encryption. Now, it wasn't normally networked, so to get the information you had to stand in front of it. But here's where it got interesting: you could create an account to give yourself access to the data, and only a password was required - no username. Just one single string of characters. And because that was t