Imagine this scenario for a moment: You’re running a cockroach farm. You have cameras all over the place, and all the cameras are equipped with advanced image recognition technology. It is a rather boring day, until you review the logs at the end of your shift. While the system shows zero recorded instances of cockroaches escaping into the staff-only areas, it shows seven recorded instances of giraffes. Curious about what happened, you decide to review the camera footage.
You have just begun playing the footage from the first “giraffe” timestamp when you hear the skittering of millions of tiny feet.
Your image recognition algorithm was fooled by an adversarial attack. With special knowledge of your algorithm’s design or training data, or even via trial and error, the cockroaches were able to design tiny note cards that would fool the A.I. into thinking it was seeing giraffes instead of cockroaches. The tiny note cards wouldn’t have looked remotely like giraffes to people—they’d be just a bunch of rainbow-colored static. And the cockroaches didn’t even have to hide behind the cards—all they had to do was keep showing the cards to the camera as they walked brazenly down the corridor.
While this scenario is entirely fictitious, it contains a kernel of truth: image recognition systems can be fooled, and fooled easily.
Researchers have demonstrated that they could show an image recognition algorithm a picture of a lifeboat (which it identifies as a lifeboat with 89.2 percent confidence), then add a tiny patch of specially designed noise way over in one corner of the image. A human looking at the picture could tell that this is obviously a picture of a lifeboat with a small patch of rainbow static over in one corner. The A.I., however, identifies the lifeboat as a Scottish terrier with 99.8 percent confidence.
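The core trick behind attacks like this can be sketched in a few lines. Below is a minimal, hypothetical illustration of the fast gradient sign method (one well-known way to craft adversarial inputs), applied to a toy linear classifier rather than a real image recognition model; the weights, input, and epsilon value are all made up for demonstration.

```python
import numpy as np

# Toy stand-in for an image classifier: a single logistic unit.
# w is a made-up "learned" weight vector; x is a flattened "image".
rng = np.random.default_rng(0)
w = rng.normal(size=64)
x = w / np.linalg.norm(w)  # an input the model classifies confidently as class 1

def predict(v):
    """Probability the model assigns to class 1 for input v."""
    return 1.0 / (1.0 + np.exp(-w @ v))

# Fast gradient sign method: nudge every "pixel" a tiny, bounded amount
# in the direction that most decreases the class-1 score. For this linear
# model, the gradient of the logit with respect to x is simply w.
eps = 0.25
perturbation = -eps * np.sign(w)
x_adv = x + perturbation

print(predict(x))      # very close to 1: confident "class 1"
print(predict(x_adv))  # very close to 0: the tiny nudge flips the verdict
```

Each individual change is capped at 0.25, so to a human the perturbed input looks almost identical to the original, much like the rainbow static on the note cards; yet the classifier's confident verdict flips. Real attacks on deep networks work the same way in spirit, just with gradients computed through many layers.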
I wonder: when will recognition systems be perfected?
More details about this over at Slate.
(Image Credit: geralt/ Pixabay)