Why did my classifier just mistake a turtle for a rifle?

MIT News, July 31, 2019

We know instinctively that people and machines see the world differently, and a new paper from MIT shows that the difference can be isolated and measured. The researchers demonstrated that a computer vision model can be compromised in a so-called black-box attack simply by feeding it progressively altered images until one causes the system to fail. They recently highlighted multiple cases in which classifiers were duped into mistaking cats for guacamole and skiers for dogs. They trained a model to identify cats based on “robust” features recognizable to humans, and “non-robust” features that humans […]
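The black-box attack described above can be sketched in a few lines of code. Below is a minimal random-search version, assuming the attacker can only query the model for output probabilities, never its gradients. The `query_model` stub, the perturbation budget `eps`, and the step size are illustrative placeholders, not the researchers' actual procedure.

```python
import numpy as np

# Hypothetical stand-in for the victim classifier. In a real black-box
# attack the attacker can only query the model for its outputs.
def query_model(image: np.ndarray) -> np.ndarray:
    """Return class probabilities for an image with pixels in [0, 1]."""
    rng = np.random.default_rng(abs(hash(image.tobytes())) % (2**32))
    logits = rng.normal(size=10)
    e = np.exp(logits - logits.max())
    return e / e.sum()

def black_box_attack(image, true_label, eps=0.1, step=0.02, iters=2000):
    """Random-search attack: propose small pixel changes, keep each one
    that lowers the model's confidence in the true label, and stop as
    soon as the top prediction flips to some other class."""
    rng = np.random.default_rng(0)
    delta = np.zeros_like(image)          # accumulated perturbation
    best = query_model(np.clip(image, 0.0, 1.0))[true_label]
    for _ in range(iters):
        trial = np.clip(delta + rng.uniform(-step, step, image.shape),
                        -eps, eps)        # stay inside the budget
        candidate = np.clip(image + trial, 0.0, 1.0)  # keep a valid image
        probs = query_model(candidate)
        if probs.argmax() != true_label:
            return candidate              # classifier now picks another class
        if probs[true_label] < best:      # a progressively better alteration
            delta, best = trial, probs[true_label]
    return np.clip(image + delta, 0.0, 1.0)  # best attempt found

if __name__ == "__main__":
    img = np.random.default_rng(1).uniform(size=(32, 32, 3))
    adv = black_box_attack(img, true_label=3)
```

The key point the sketch illustrates is that no knowledge of the model's internals is needed: repeated queries alone are enough to find an image that looks nearly unchanged to a human but flips the classifier's prediction.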