Hacked images can fool algorithms that detect cancer

A new study shows that artificial intelligence programs that check medical images for evidence of cancer can be deceived by hacks and cyberattacks.

Researchers demonstrated that a computer program could add or remove evidence of cancer from mammograms, and those changes fooled both an AI tool and human radiologists.

Those changes could lead to an incorrect diagnosis: an AI program helping to screen mammograms might say a scan is healthy when there are actually signs of cancer, or incorrectly say that a patient has cancer when they're actually cancer-free.

Such hacks are not known to have happened in the real world yet, but the new study adds to a growing body of research suggesting healthcare organizations need to be prepared for them.

Hackers are increasingly targeting hospitals and healthcare institutions with cyberattacks. Most of the time, those attacks target patient data (which is valuable on the black market) or lock up an organization's computer systems until the organization pays a ransom. Both of those types of attacks can harm patients by gumming up a hospital's operations and making it harder for healthcare workers to deliver good care.

Whatever the reason behind such an attack might be, demonstrations like this one show that healthcare organizations and the people designing AI models should be aware that hacks altering medical scans are a possibility.

Models should be shown manipulated images during their training to teach them to spot fake ones, study author Shandong Wu, associate professor of radiology, biomedical informatics, and bioengineering at the University of Pittsburgh, said in a statement. Radiologists might also need to be trained to identify fake images.
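To make that idea concrete, here is a minimal sketch of the kind of training Wu describes: mixing manipulated images into a model's training data, labeled as such, so the model learns to flag them. The framework (PyTorch), the tensor shapes, the stand-in data, and the small network below are all illustrative assumptions, not the setup used in the study.

```python
# Minimal sketch (PyTorch assumed): train a binary classifier to flag
# tampered mammograms by mixing manipulated images into the training set.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical stand-in data: grayscale mammogram patches, 1 x 128 x 128.
# Label 0 = unaltered scan, 1 = manipulated (lesion inserted or removed).
real_images = torch.randn(512, 1, 128, 128)
tampered_images = torch.randn(512, 1, 128, 128)
images = torch.cat([real_images, tampered_images])
labels = torch.cat([torch.zeros(512), torch.ones(512)]).long()

loader = DataLoader(TensorDataset(images, labels), batch_size=32, shuffle=True)

# A small CNN detector; a production system would use a far larger model
# trained on real (not random) image data.
detector = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 2),  # two outputs: unaltered vs. manipulated
)

optimizer = torch.optim.Adam(detector.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for batch_images, batch_labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(detector(batch_images), batch_labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```

The same manipulated examples could also be folded into the diagnostic model's own training set, so that the screening model itself becomes harder to fool rather than relying on a separate detector.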

“We hope that this research gets people thinking about medical AI model safety and what we can do to defend against potential attacks,” Wu said.

In the study, around 70 percent of the manipulated images fooled the AI program — that is, the AI wrongly said that images manipulated to look cancer-free were cancer-free, and that images manipulated to look like they had cancer showed evidence of cancer. The radiologists, for their part, varied in how well they spotted the manipulated images: their accuracy at picking out the fakes ranged from 29 percent to 71 percent.