
A study published this week in Nature Communications investigated whether adversarial images could fool an artificial intelligence model developed to diagnose breast cancer.

University of Pittsburgh researchers were able to simulate an attack that falsified mammogram images, leading the model – and human experts – to draw incorrect conclusions.  

“What we want to show with this study is that this type of attack is possible, and it could lead AI models to make the wrong diagnosis – which is a big patient safety issue,” said senior author Shandong Wu, associate professor of radiology, biomedical informatics and bioengineering at Pitt, in a statement.

“By understanding how AI models behave under adversarial attacks in medical contexts, we can start thinking about ways to make these models safer and more robust,” said Wu.  

WHY IT MATTERS  

Researchers observed that deep learning models are increasingly relied on in diagnostic capacities to augment human expertise.

To that end, they write, “It is imperative to build trustworthy, reliable, and safe AI systems for clinical deployment.”  

One way to measure such safety is to evaluate how an AI model behaves when faced with cyberattacks – in this case, “adversarial images,” inputs that have been subtly tweaked so that a model draws the wrong conclusion.
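The Pitt study built its adversarial images with a generative model, described below, but the basic idea can be illustrated with a simpler, widely known technique, the fast gradient sign method: nudge each pixel in the direction that increases the classifier's loss. The sketch below is illustrative only; the model, image tensor and epsilon value are assumptions, not details from the paper.

```python
import torch
import torch.nn.functional as F

def fgsm_adversarial(model, image, label, epsilon=0.01):
    """Illustrative adversarial example via the fast gradient sign method.
    `image` is a (1, C, H, W) tensor, `label` a (1,) long tensor."""
    image = image.clone().detach().requires_grad_(True)
    logits = model(image)                      # classifier output, e.g. (1, 2)
    loss = F.cross_entropy(logits, label)
    loss.backward()
    # Step each pixel in the direction that increases the loss.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()  # keep pixel values in a valid range
```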

For their study, the Pitt team used mammogram images to train an algorithm to distinguish breast cancer-positive cases from negative ones.   
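The paper does not publish its training code, but a two-class mammogram classifier of this kind is typically built by fine-tuning a pretrained CNN. The following is a minimal sketch under that assumption; the random stand-in tensors replace real mammograms, which are grayscale and would in practice be replicated across channels or fed through an adapted first layer.

```python
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical setup: fine-tune a pretrained ResNet to label images
# as cancer-negative (0) or cancer-positive (1).
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# Stand-in data: random tensors in place of real mammograms (illustration only).
loader = [(torch.rand(8, 3, 224, 224), torch.randint(0, 2, (8,)))]

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```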

Next, the researchers developed generators to create intentionally misleading data by “inserting” cancerous regions into negative images or “removing” regions from positive images.  

The trick worked: The model was fooled by 69.1% of the fake images.   
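A figure like that is typically computed by counting how many adversarial images flip the classifier's original prediction. A minimal sketch of that bookkeeping, assuming paired batches of original and adversarial images from a generator like the one described above:

```python
import torch

@torch.no_grad()
def attack_success_rate(model, originals, adversarials):
    """Fraction of adversarial images whose predicted class differs from
    the prediction made on the corresponding original image."""
    model.eval()
    orig_pred = model(originals).argmax(dim=1)
    adv_pred = model(adversarials).argmax(dim=1)
    return (orig_pred != adv_pred).float().mean().item()
```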

The researchers then recruited five human radiologists to judge whether images were real or fake. Results varied: depending on the reader, accuracy at spotting the fakes ranged from 29% to 71%.

“Certain fake images that fool AI may be easily spotted by radiologists. However, many of the adversarial images in this study not only fooled the model, but they also fooled experienced human readers,” said Wu.  

“Such attacks could potentially be very harmful to patients if they lead to an incorrect cancer diagnosis.”  

The researchers noted that high-resolution images had a greater chance of fooling the model and were harder for human readers to spot as fake.

They noted that motivations for adversarial attacks include monetary gain, insurance fraud and the appearance of favorable clinical trial outcomes.   

The team emphasized the importance of shoring up AI safety measures in this regard.  

“One direction that we are exploring is ‘adversarial training’ for the AI model,” said Wu. “This involves pre-generating adversarial images and teaching the model that these images are manipulated.”  
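One common way to implement the adversarial training Wu describes is to mix pre-generated adversarial images into each training batch, either with their correct clinical labels or, as in the quote, with an extra "manipulated" label. The sketch below shows the latter variant; the three-class label scheme and helper names are assumptions for illustration, not the study's published method, and the model would need a three-class output head.

```python
import torch
import torch.nn as nn

# Assumed label scheme: 0 = negative, 1 = positive, 2 = manipulated image.
criterion = nn.CrossEntropyLoss()

def adversarial_training_step(model, optimizer, clean_imgs, clean_labels, adv_imgs):
    """One training step that mixes pre-generated adversarial images into the
    batch under an extra 'manipulated' label, so the classifier learns to
    recognize tampered inputs rather than misdiagnose them."""
    adv_labels = torch.full((adv_imgs.size(0),), 2, dtype=torch.long)
    images = torch.cat([clean_imgs, adv_imgs])
    labels = torch.cat([clean_labels, adv_labels])

    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```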

THE LARGER TREND  

As the Pitt research team explained, the use of AI and machine learning to analyze medical imaging data has jumped in recent years.  

But the field also carries unique challenges. In addition to the safety concerns raised in the Nature Communications article, experts have cited difficulties with gathering data at scale, obtaining diverse information and accurate labeling.  

“You need to understand how your AI tools are behaving in the real world,” said Elad Benjamin, general manager of Radiology and AI Informatics at Philips, during an Amazon Web Services presentation earlier this year.   

“Are there certain subpopulations where they are less effective? Are they slowly reducing in their quality because of a new scanner or a different patient population that has suddenly come into the fold?”  

ON THE RECORD  

“We hope that this research gets people thinking about medical AI model safety and what we can do to defend against potential attacks, ensuring AI systems function safely to improve patient care,” said Pitt’s Wu in a statement.


Kat Jercich is senior editor of Healthcare IT News.
Twitter: @kjercich
Email: [email protected]
Healthcare IT News is a HIMSS Media publication.