Connecting everyday devices to the cloud has become commonplace—you can brew coffee, turn on the lights, and heat a room from an app on your smartphone. And now, doctors can update and monitor data collected by medical devices implanted in a patient’s body through a similar connection.
The benefit of this advancement is easy to see: remote updates to a device’s software allow for personalized treatment without surgery, a convenience for both patient and doctor. But with this comes a downside. Hackers can exploit unsecured wireless connections to gain access to implanted devices. These devices can then be individually manipulated—insulin pumps can be programmed to deliver an excess dose of medication; pacemakers, to send an extra shock to the heart. These devices can also serve as a gateway into entire medical systems. Once inside the network, hackers can install ransomware like WannaCry on a hospital’s systems or steal the healthcare data of every patient in the network.
The medical industry is aware of this problem. The FDA has issued guidance to mitigate it and has even released an Action Plan for medical device companies. White hat hackers publicize vulnerabilities and demonstrate malfunctions at conferences. Hollywood has even caught on, incorporating a pacemaker assassination hack into an episode of Homeland.
Discretion over how to handle potential hacks, however, still falls to medical device companies. This is problematic because companies may choose not to tell patients about bugs or to update devices even after hacks are discovered. In 2016, Johnson & Johnson chose to disclose a security vulnerability in its insulin pump system that could allow hackers to overdose diabetic patients with insulin. The company assured patients that the risk was low, but admitted that the vulnerability affected 114,000 patients. The public largely ignored the gravity of that number, focusing instead on the fact that the company had disclosed the flaw at all. Similarly, Medtronic famously engaged in a two-year battle with security researchers who had exposed a vulnerability in the company’s pacemakers. Medtronic initially denied the potential for an attack and refused to take action. Years later, after the risk became too large to ignore, the company finally chose to disable remote updates for its pacemakers.
Where does that leave us? Hundreds of thousands of these medical devices are implanted every year, including 370,000 cardiac pacemakers, creating a large, vulnerable population. Hospitals could sue medical device companies for opening their systems to potential breaches. Patients could sue over malfunctioning devices. Both courses of action, however, require that a harm first be suffered. Another option is policy change for the industry, potentially transforming the FDA guidelines from suggestions into requirements.
One thing is certain: cybersecurity has taken a backseat to medical purpose and innovation. Failing to penalize companies for noncompliance with the FDA’s recommendations will only exacerbate this problem. Action must be taken to protect patients and to force medical device companies to value patient safety.*
*Fiona Gaul is an associate editor on the Michigan Technology Law Review.