What Are the Risks of Using Artificial Intelligence in Healthcare?

August 19, 2019

In recent years, technology companies, developers, investors, and others have turned their attention to Artificial Intelligence (“AI”). Although little consensus exists regarding its definition, AI solutions generally leverage powerful computing algorithms to analyze data and produce outputs that mimic human intelligence at a speed and scale beyond human capability. The pace at which companies have been able to develop and implement AI is due, in part, to a lack of laws and regulations governing its development and use. Yet without much legal guidance, companies are assuming great risk by adopting AI at such a rapid clip.

This brings us to a fundamental question: Can humans trust something that cannot be fully understood? AI solutions, in many ways, constitute “black boxes” that offer little in the way of traceability or accountability for what goes on inside. While the promise of AI is tempting, the unknown is perilous.

Nevertheless, the promise of AI itself is compelling. AI is attractive to companies solving a wide array of problems across many industries. Most notably, AI innovation is booming in the healthcare space and is already seeing applications in, inter alia, radiology, coding, clinical decision support, and diagnosis. Yet these applications still carry risks.

For example, one of the key risks associated with AI in healthcare relates to patient privacy. Inherent in the development of AI in healthcare is the need to train AI technologies using large quantities of patient data. But the not-so-long arm of the Health Insurance Portability and Accountability Act (“HIPAA”) does not extend to technology companies under many circumstances. To fill the gap where HIPAA may not apply, the Federal Trade Commission (“FTC”) has taken action against some companies for lax privacy and data protection practices. In some cases, the Food and Drug Administration (“FDA”) has regulated AI products as well. Despite the multitude of regulators exerting their respective jurisdiction over these technologies, no comprehensive laws or regulations exist to address AI risks holistically.

Unfortunately, patient privacy is just the tip of the iceberg. AI in healthcare also presents risks related to patient safety, discrimination and bias, fraud and abuse, and cybersecurity, among others. The healthcare industry, in its continuing efforts to drive down costs and improve quality, will increasingly seek to leverage AI when rendering medical services and seeking reimbursement for those services. However, such applications of AI should be approached with caution to avoid malpractice, financial, reputational, and other business risks. The question remains: How can these risks be managed effectively in the absence of direct and robust regulation?

To learn more, check out Artificial Intelligence in Healthcare: Legal and Ethical Issues!

*This article was prepared with help from Audrey Davis, an associate at Epstein Becker Green.

Author Bio

Written by Alaap Shah

Alaap B. Shah is a Member of the Firm in the Health Care and Life Sciences practice, in the firm's Washington, DC, office. Mr. Shah:

- Advises clients on privacy, cybersecurity, and data protection laws and regulations
- Advises on cybersecurity and data breach matters
- Advises clients on health care fraud and abuse matters and government investigations relating to health information technology
- Counsels clients on digital health and data asset management strategies and related compliance issues

His work focuses on defense and counseling of health care entities on legal and regulatory compliance issues around privacy, cybersecurity, and data asset management. He has extensive experience with legal issues related to health information technology, big data analytics, and digital health strategies. He provides compliance counseling, establishes and evaluates compliance programs, conducts privacy and security risk assessments, establishes compliant contracting strategies to build trust networks, and responds to data breaches.

Mr. Shah is a Certified CSF Practitioner, a designation given by the Health Information Trust Alliance (HITRUST), an organization that provides training to develop and maintain effective security programs for health care and life sciences companies that comply with security laws, regulations, and standards, including HITECH, HIPAA, PCI, JCAHO, CMS, ISO, NIST, and various other federal, state, and business requirements. He is also recognized by the Healthcare Information and Management Systems Society (HIMSS) as a Certified Professional in Healthcare Information and Management Systems (CPHIMS) and by the International Association of Privacy Professionals (IAPP) as a Certified Information Privacy Professional in the United States.

Mr. Shah began his legal career at Epstein Becker Green. Before rejoining the firm in October 2017, he served as Senior Counsel and Chief Privacy and Security Officer at an oncology membership society, where he strengthened enterprise-wide privacy and security, helped establish a big data initiative focused on improving quality of care by harnessing cancer patient medical information, and built data sharing trust networks among the oncology community. During law school, Mr. Shah worked with the U.S. Department of Health and Human Services (DHHS), Office of General Counsel, where he provided legal counsel and support to all agencies and programs under the Public Health Division of DHHS.
