TechXplore August 5, 2020
To help individuals inoculate their images against unauthorized facial recognition models, researchers at the University of Chicago have developed a system called Fawkes. It lets individuals add imperceptible pixel-level changes (which they call “cloaks”) to their own photos before releasing them. When used to train facial recognition models, the “cloaked” images produce functional models that consistently misidentify normal images of the user. In experiments, Fawkes provided over 95% protection against user recognition regardless of how trackers trained their models. The researchers have shown that Fawkes is robust against a variety of countermeasures that try to detect or disrupt image cloaks.
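The core idea behind cloaking is to perturb a photo within a small, imperceptible pixel budget so that a recognition model's feature extractor maps it toward a different identity. The sketch below illustrates that optimization in miniature; the linear "feature extractor", the epsilon budget, and all names here are illustrative assumptions for a self-contained demo, not the actual Fawkes implementation, which optimizes against deep feature extractors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a face-recognition feature extractor (assumption:
# a random linear map; the real system uses a deep network).
W = rng.normal(size=(16, 64))

def features(x):
    """Map a flattened 'image' to a feature vector."""
    return W @ x

def cloak(image, target_feat, eps=0.05, steps=200, lr=0.01):
    """Perturb `image` within an L-infinity budget `eps` so its
    features move toward `target_feat` (another identity), keeping
    the pixel-level change small."""
    x = image.copy()
    for _ in range(steps):
        # Gradient of ||W x - target||^2 with respect to x.
        grad = 2.0 * W.T @ (features(x) - target_feat)
        x -= lr * np.sign(grad)                   # signed gradient step
        x = np.clip(x, image - eps, image + eps)  # stay inside the budget
        x = np.clip(x, 0.0, 1.0)                  # keep valid pixel values
    return x

image = rng.uniform(0.2, 0.8, size=64)            # user's photo (flattened)
other = rng.uniform(0.2, 0.8, size=64)            # photo of another identity
target = features(other)

cloaked = cloak(image, target)
print("max pixel change:", np.max(np.abs(cloaked - image)))
print("feature distance before:",
      np.linalg.norm(features(image) - target))
print("feature distance after:",
      np.linalg.norm(features(cloaked) - target))
```

Models trained on such cloaked images associate the user's identity with the shifted feature region, so unmodified photos of the user later fall outside what the model learned and are misidentified.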
Image cloaking tool thwarts facial recognition programs