
Automating Healthcare: Current Challenges that Must be Addressed

Artificial Intelligence has the potential to improve health care systems worldwide. For example, AI can optimize workflow in hospitals, provide more accurate diagnoses, and bring better medical treatments to patients. However, medical AI also creates challenges that we, as a society, need to face. This article does not attempt a comprehensive listing but focuses on three important obstacles: safety, transparency, and privacy.

Regulating Safety

It is of utmost importance that the use of AI in medicine be safe and effective. How do we ensure that AI trained in a particular setting will remain reliable once deployed? As a real example, IBM Watson for Oncology uses AI algorithms to assess patients’ medical records and help physicians explore cancer treatment options. However, it has come under fire for not delivering on expectations and for providing “unsafe and incorrect” recommendations. The problem appears to have been in the training, which was based not on an analysis of historical patient records but on only a few “synthetic cases.” This highlights the need for datasets that are reliable and valid. The software is only as good as the data it is trained on, so the collection and curation of data can be the most critical part of deploying effective AI. Algorithms also need continuous refinement and updating. This ability to keep learning separates AI from other medical devices.

In terms of legal regulation, some medical AI-based products must undergo review by the FDA. A medical device is defined under section 201(h) of the Federal Food, Drug, and Cosmetic Act. While some software functions do not qualify (see section 520(o)(1)(E)), AI that is “Software as a Medical Device” does fall within the FDA’s purview. However, AI devices differ from the typical drugs and medical devices that the FDA regulates, because AI is better viewed as an evolving system than as a fixed product. This shift in perspective is central to maximizing the safety and efficacy of AI in health care, but it also poses challenges for agencies accustomed to regulating products rather than systems.

Opening the “Black Box”

Related to safety, the way AI systems reach their recommendations is often opaque to physicians. This lack of transparency is known as the “black box” problem. AI developers should be sufficiently transparent that we can understand the kind of data used and any shortcomings of the software. Transparency promotes trust among stakeholders, including clinicians and patients, which is key to a successful implementation of AI in clinical practice. For example, IBM kept Watson’s incorrect treatment recommendations secret for over a year.

However, requiring transparency can be a difficult task, especially as AI algorithms continue to grow in complexity. We want to continue to open the “black box,” but insisting on high levels of algorithmic explainability may stifle innovation. For example, Google’s Inception-v3 model, which is more accurate than physicians at identifying diabetic retinopathy from fundus photographs and skin cancer from dermoscopic images, has 23 million parameters. That complexity makes it difficult to understand how the model arrives at a given prediction. Notwithstanding the opacity, we want to promote these beneficial innovations even when we do not understand how a particular outcome is reached. Transparency also does not have to be all or nothing. For models with discrete and well-understood tasks, like image analysis, laboratory testing, and natural language processing, lower explainability expectations may be acceptable. However, we may be less comfortable with black-box algorithms applied to less-explored problems, like diagnostic or treatment decisions, where the risk of bias is higher.

Overcoming Privacy Hurdles

The interaction of medical AI and privacy also poses difficult hurdles. From a legal perspective, the Health Insurance Portability and Accountability Act (HIPAA) is the key federal law protecting health data privacy, but it is ill-suited to protecting private data in AI-based health care for at least two reasons. First, HIPAA has significant gaps: it covers only health information generated by “covered entities” or their “business associates.” Second, HIPAA relies on de-identification as a privacy strategy, yet AI enables re-identification of patients by finding patterns in data.
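To make the re-identification risk concrete, below is a minimal, hypothetical sketch of a classic linkage attack: joining a “de-identified” dataset with publicly available auxiliary data on shared quasi-identifiers. All names, values, and datasets here are invented for illustration, and real attacks can exploit far subtler statistical patterns than an exact join.

```python
import pandas as pd

# Hypothetical "de-identified" hospital records: names removed, but
# quasi-identifiers (ZIP code, birth date, sex) remain.
deidentified = pd.DataFrame({
    "zip": ["48104", "48104", "48197"],
    "birth_date": ["1985-03-02", "1990-07-14", "1985-03-02"],
    "sex": ["F", "M", "F"],
    "diagnosis": ["diabetes", "asthma", "hypertension"],
})

# Hypothetical public auxiliary data (e.g., a voter roll) that lists names
# alongside the same quasi-identifiers.
auxiliary = pd.DataFrame({
    "name": ["Alice Smith", "Bob Jones"],
    "zip": ["48104", "48104"],
    "birth_date": ["1985-03-02", "1990-07-14"],
    "sex": ["F", "M"],
})

# A simple join on the shared quasi-identifiers re-attaches names to
# supposedly anonymous medical records.
reidentified = deidentified.merge(auxiliary, on=["zip", "birth_date", "sex"])
print(reidentified[["name", "diagnosis"]])
```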

In the other direction, current privacy law also creates barriers to developing the most effective AI. HIPAA limits what data can be used and shared, and complying with those regulations can be expensive or even prohibitive, depending on the developer’s resources. As a result, AI, which requires massive amounts of data for training, may be built on narrower, less representative datasets, raising the risk of bias in the resulting models.

Conclusion

AI has proven that it can greatly benefit the medical field and is already revolutionizing healthcare. Safety, transparency, and privacy are difficult challenges, but they are not walls. Developers and regulators should continue to deploy AI clinically while working to minimize its potential harms.

Seth Raker is an Associate Editor on the Michigan Technology Law Review.
