Autonomous Artificial Intelligence in Medical Imaging

 

Concerns of Autonomous Artificial Intelligence in Medical Imaging

 


As AI systems are developed and approved in the wider medical imaging space, challenges including but not limited to regulatory, privacy, and ethical concerns will start to surface. There are many key questions to be addressed.

Intelligent and autonomous algorithms are making long strides toward clinical and workflow improvements in medicine. Within healthcare, one of the most promising areas of innovation is the application of Artificial Intelligence (AI) in radiology and medical imaging. Over the past few years, there has been a rapid surge in the adoption of AI and automated algorithms across the healthcare and life science realm. While AI is on the path to maturing into a reliable, robust, and stable technology, many opponents caution against it, arguing that society is far from ready to use it as part of mainstream healthcare for delivering care to patients.

Radiology and medical imaging, like other branches of medicine, require comprehensive medical context before a patient's medical state can be accurately evaluated. AI and advanced computer-aided diagnosis have long been welcomed in medicine because they support physicians with clinical decision support, smarter review of the medical literature, and more granular real-world outcome data points, ultimately letting physicians and the care delivery team dedicate more time to actual treatment and delivery of care. Lately, however, there has been a rather steep uptake in autonomous detection with minimal to no physician intervention, putting this nascent technology under the spotlight.

Back in 2018, the FDA approved the first AI-enabled medical imaging device, IDx-DR, which detects diabetic retinopathy from retinal images without requiring a physician to interpret them. That opened the door to a wave of AI-enabled medical devices and technology that can be operated by technicians and auxiliary support staff without the thorough, labor-intensive training we typically assume of physicians and specialists working in the field. Along similar lines, the FDA cleared AI-guided echocardiography software from San Francisco-based Caption Health to help capture ultrasound images of the heart. The heart is a highly complex organ, pumping blood through millions of small and inaccessible capillaries throughout the body, and it takes cardiologists many years of rigorous training to correctly image and diagnose abnormalities, let alone treat them. With AI-guided echocardiography, however, what was typically a highly specialized job requiring niche expertise and many years, if not decades, of experience can now be performed by nurses with just a few days of training, guided by the algorithm.

The FDA approved these products through its "De Novo" pathway after research showed that the technologies yielded similar or better results, despite the lack of sufficient longitudinal clinical studies. In the case of the echocardiography software, the FDA stated that the images of the heart produced were of a high level of precision and quality, even when the system was operated by auxiliary staff. The bigger question, however, is how the FDA will regulate the use and implementation of such AI-enabled medical diagnostics outside their initially approved settings. Their reliability, safety, and accuracy may be compromised when the original algorithms or intended use are modified beyond the initial approval.

As AI systems are developed and approved in the wider medical imaging space, challenges including but not limited to regulatory, privacy, and ethical concerns will start to surface. Below is a snapshot of key questions whose answers will shape technology and AI adoption in healthcare imaging.

1. What are the various risks associated with false positive and false negative results when AI is implemented in medical imaging and radiology, and what are the current protocols to mitigate these risks? (A simple numeric sketch follows this list.)

2. How is the data kept secure, especially in light of the plethora of data breaches that occurred within just the last decade?

3. What are the FDA guidelines on how and when to implement AI-guided medical diagnosis and imaging, and, more importantly, in which situations is implementing these AI systems not recommended?

4. What are the minimum skills and expertise required to implement, monitor, and test AI-guided medical diagnostics? What level of human oversight is considered critical and necessary for these implementations?

5. How do physicians and caregivers plan to continuously monitor AI-driven care?
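
To make the first question concrete, here is a minimal sketch in Python, using hypothetical performance numbers that are not tied to any approved device, of how disease prevalence changes the practical weight of false positives and false negatives. The same sensitivity and specificity that look impressive in a validation study can translate into a low positive predictive value once the tool is deployed in a low-prevalence screening population.

```python
# Illustrative sketch (hypothetical numbers): how disease prevalence drives the
# real-world burden of false positives and false negatives for an AI screening tool.

def predictive_values(sensitivity: float, specificity: float, prevalence: float):
    """Return (positive predictive value, negative predictive value) via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    true_neg = specificity * (1 - prevalence)
    ppv = true_pos / (true_pos + false_pos)
    npv = true_neg / (true_neg + false_neg)
    return ppv, npv

# Assumed performance figures, for illustration only, not from any approved device:
for prevalence in (0.30, 0.05, 0.01):
    ppv, npv = predictive_values(sensitivity=0.90, specificity=0.90, prevalence=prevalence)
    print(f"prevalence {prevalence:>4.0%}: PPV {ppv:.1%}, NPV {npv:.1%}")
```

Under these illustrative assumptions, fewer than one in ten positive calls at 1% prevalence reflects true disease, which is why the first question is inseparable from protocols for confirmatory reads and human oversight.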

Another aspect of AI implementation concerns the legal challenges created by its use in medicine. AI algorithms embedded within software applications are not subjected to the same scrutiny that medical devices face. Frameworks such as the European Union's Product Liability Directive may further complicate medico-legal cases, where failure to use such AI-enabled applications could confound already complex medical negligence lawsuits.

Although the algorithms used for diagnosis differ from those applied to treatment, all of these underlying algorithms still need quality checks, validation, and approval. Thoroughly testing the veracity of their claims and validating the algorithms requires large amounts of data. Hence, rigorous evaluation and well-thought-out regulatory guidelines are needed to establish autonomous AI and automated technology as legitimate.

Conclusion: Technological challenges in AI, its application to medical diagnosis, the adoption of the technology by healthcare consumers and, more than anything, our response to the ethical challenges posed by AI are all evolving at an unprecedented speed. At this early stage of AI adoption, the ramifications are not only difficult to comprehend but will also be complex to manage when they materialize. As we come to understand some of these ethical challenges, it is pivotal not only to share them with a broader audience but also to open a dialogue that fairly represents the different strata of society.
