In recent years, technology companies, developers, investors, and others have turned their attention to Artificial Intelligence (“AI”). Although little consensus exists regarding its definition, AI solutions generally leverage powerful computing algorithms to analyze data and produce outputs that mimic human intelligence at greater speed and scale than humanly possible. The speed at which companies have been able to develop and implement AI is due, in part, to a lack of laws and regulations governing its development and use. Yet without much legal guidance, companies are assuming great risk by adopting AI at such a rapid clip.
This raises a fundamental question: can humans trust something they cannot fully understand? AI solutions, in many ways, constitute “black boxes,” offering little insight into what happens inside the box in the form of traceability or accountability. While the promise of AI is tempting, the unknown is perilous.
Nevertheless, that promise is compelling. AI is attractive to companies solving a wide range of problems across many industries. Most notably, AI innovation is booming in the healthcare space and is already seeing applications in, inter alia, radiology, coding, clinical decision support, and diagnosis. These applications, however, still carry risks. In healthcare, one of the key risks associated with AI relates to patient privacy. Inherent in the development of AI in healthcare is the need to train AI technologies using large quantities of patient data. But the not-so-long arm of the Health Insurance Portability and Accountability Act (“HIPAA”) does not extend to technology companies under many circumstances. To fill that gap where HIPAA may not apply, the Federal Trade Commission (“FTC”) has taken action against some companies for lax privacy and data protection practices. In some cases, the Food and Drug Administration (“FDA”) has regulated AI products. Despite the multitude of regulators exerting their respective jurisdiction over these technologies, no comprehensive laws or regulations exist to address AI risks holistically.
Unfortunately, patient privacy is just the tip of the iceberg. AI in healthcare also presents risks related to patient safety, discrimination and bias, fraud and abuse, and cybersecurity, among others. The healthcare industry, in its continuing efforts to drive down costs and improve quality, will increasingly seek to leverage AI when rendering medical services and seeking reimbursement for those services. Such applications of AI, however, should be approached with caution to avoid malpractice, financial, reputational, and other business risks. The question remains: How can these risks be managed effectively in the absence of direct and robust regulation?
To learn more, check out Artificial Intelligence in Healthcare: Legal and Ethical Issues!
*This article was prepared with help from Audrey Davis, an associate at Epstein Becker Green.