AI in Healthcare Poses Algorithmic Bias, Data Privacy, Security Risks

AI in healthcare has a multitude of clinical applications, but it also presents the potential for algorithmic bias, along with data privacy and security concerns.

By Jill McKeon

- Artificial intelligence (AI) in healthcare has shown the potential to revolutionize research and care delivery, but data privacy and security risks, along with the potential for algorithmic bias, cannot be ignored.

“When you're using large amounts of data, privacy and security are concerns. The more data you feed through an algorithm, the more risk there is that there could be breaches in privacy and security,” Linda Malek, partner at Moses & Singer and chair of the firm’s healthcare, privacy, and cybersecurity practice group, said in the latest episode of Healthcare Strategies.

“There is an inherent tension between the protection of privacy and security and the use of data in an AI context.”

Listen to the full podcast to hear more details. And don’t forget to subscribe on iTunes, Spotify, or Google Podcasts.

In addition to privacy and security risks, the potential for algorithmic bias threatens to diminish the validity of some algorithms. For example, if a vulnerable population is underrepresented in a dataset, the algorithm may produce skewed and misleading results.

Legislators are increasingly turning their attention toward algorithmic bias and the need for more stringent regulations surrounding AI development practices, as well as security and privacy risks.

“There is a place for legislation and ensuring that oversight authorities have the enforcement authority that they need in order to ensure that entities that are gathering all of this data are doing it in a responsible way,” Malek stated.

“But you don't want regulation that is so stringent that it impedes the ability to gather a lot of data.”

As legislators navigate the complexities of regulating AI while balancing bias, security, and privacy concerns, there are still steps that AI developers and users can take now to mitigate risk.

Malek suggested a handful of best practices focused on transparency, security, privacy, and accuracy when developing an AI algorithm. Developers should collect as much data as possible to ensure accuracy, and make sure that the data reflects diverse populations. Additionally, incorporating privacy, accuracy, and security into the algorithm by design is crucial to its effectiveness.

“From the standpoint of a healthcare practitioner, before you purchase, employ, or otherwise partner with an entity that is designing machine learning and AI, you need to ask a lot of questions,” Malek added.

“And there needs to be a real plan so that from the end-user standpoint, there's an understanding of what the algorithm does, what the algorithm gathers so that there's really a partnership between both the designer and the end-user in order to identify any biases that may be present.”

©2012-2024 TechTarget, Inc. All rights reserved. HealthITAnalytics.com is published by Xtelligent Healthcare Media, a division of TechTarget.