New technology brings new possibilities, and new issues along with them. The healthcare industry is next in line for AI/ML applications, and a range of problems must be addressed. The existence of these issues does not mean the technology should not be pursued, but they do need to be recognized and tackled so that their impact is minimized as much as possible.

In this article on Analytics Steps, Mallika Rangaiah shares a range of potential concerns with the use and adoption of AI applications in the healthcare industry.

“As per a report from the Brookings Institution, there are several risks associated with AI in healthcare that need to be addressed. Below are some of the threats identified in the Institution’s report:

1. Errors and injuries

One of the biggest risks of AI in healthcare is that an AI system can sometimes be wrong. If it suggests the wrong drug for a patient or misses a tumor on a radiology scan, the result could be patient injury or other dire health consequences.

AI errors are potentially different from human errors in an important way: while human medical professionals obviously make mistakes as well, a single underlying flaw in a widely used AI system could lead to injuries for thousands of patients.

2. Data availability

Yet another challenge posed by AI systems is that training them requires massive amounts of data from multiple sources, including pharmacy records, electronic health records, and insurance claims records.

Because this data is fragmented, and patients often see different providers or switch insurance companies, the data becomes harder to assemble and interpret, which drives up both the risk of error and the cost of data collection.
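To make the fragmentation problem concrete, here is a small, hypothetical sketch of what linking records from pharmacy, EHR, and claims sources can look like. Nothing here comes from the report: the patient identifiers, fields, and values are all invented for illustration.

```python
# A toy illustration of fragmented health data: the same patient appears in
# some sources but not others, and the claims source uses its own identifier.
# All data below is made up.
import pandas as pd

pharmacy = pd.DataFrame({
    "patient_id": ["P001", "P002"],
    "drug": ["metformin", "lisinopril"],
})
ehr = pd.DataFrame({
    "patient_id": ["P001", "P003"],        # P002 is missing from this source
    "diagnosis": ["type 2 diabetes", "hypertension"],
})
claims = pd.DataFrame({
    "member_id": ["M-9001", "M-9003"],     # a different identifier scheme
    "patient_id": ["P001", "P003"],
    "payer": ["Insurer A", "Insurer B"],
})

# An outer join keeps every record, but the result is full of gaps (NaNs)
# wherever a source simply has no data for that patient -- each gap is a
# decision someone has to make before the data can train a model.
merged = (
    pharmacy.merge(ehr, on="patient_id", how="outer")
            .merge(claims, on="patient_id", how="outer")
)
print(merged)
```

Even in this three-row toy example, only one patient links cleanly across all sources; at the scale of real health systems, reconciling identifiers and filling such gaps is where much of the cost and error risk comes from.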

3. Privacy concerns

The collection of huge datasets, and the exchange of data between health systems and AI developers that makes AI systems possible, leads many patients to believe their privacy could be violated, and lawsuits have been filed on this basis.

AI systems also raise privacy concerns because they can predict private information about patients even when the patients have never disclosed it.

For instance, an AI system might detect Parkinson’s disease from the trembling of a hand on a computer mouse, even if the person has never revealed this to anyone else, which the patient could consider a violation of privacy.

4. Bias and inequality

Since AI systems learn from the data they are trained on, they can also absorb the biases present in that data. For example, if the training data is collected mainly in academic medical centers, the resulting AI systems will know less about, and therefore treat less effectively, patients from populations that do not typically visit academic medical centers, as the sketch below illustrates.
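The following is a minimal, synthetic sketch of how this kind of sampling bias can show up. Nothing here comes from the report: the data, the two populations, and the simple logistic-regression model are all invented, but the pattern it produces — good performance on the group the model mostly saw during training and noticeably worse performance on the group it rarely saw — is the effect described above.

```python
# Synthetic sketch of sampling bias; all data and numbers here are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, weights):
    """Generate synthetic 'patients': two features and a binary outcome.
    `weights` is the true feature-outcome relationship, which differs
    between the two populations in this toy example."""
    X = rng.normal(size=(n, 2))
    logits = X @ np.array(weights)
    y = (logits + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

# Population A: heavily represented in training (think academic medical centers).
# Population B: barely represented (patients seen elsewhere).
XA, yA = make_group(2000, [1.5, -1.0])
XB, yB = make_group(2000, [1.0, 1.0])

# The training set is 95% population A and 5% population B -- the sampling bias.
X_train = np.vstack([XA[:1900], XB[:100]])
y_train = np.concatenate([yA[:1900], yB[:100]])

model = LogisticRegression().fit(X_train, y_train)

# Evaluate on held-out patients from each population.
print("accuracy, population A:", accuracy_score(yA[1900:], model.predict(XA[1900:])))
print("accuracy, population B:", accuracy_score(yB[100:], model.predict(XB[100:])))
# The model typically scores markedly lower on population B,
# the group it rarely saw during training.
```

The point of the sketch is not the specific numbers but the mechanism: the model has learned the patterns of the well-represented population, so its errors concentrate on the population that was missing from the data.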

5. Could lead to shifts in the profession

In the long run, the adoption of AI systems could lead to shifts in the medical profession, particularly in areas like radiology where much of the work may be automated.

This raises the concern that heavy reliance on AI might erode human knowledge and capacity over the years, leaving providers less able to detect AI errors and to further develop medical knowledge.”

Read on to find out more about the possible applications in this space here. For further information on this topic, join our weekly series “AI for All” on Clubhouse.
