Researchers design patch to make people ‘virtually invisible’ to AI detectors

The team will next print the ‘adversarial patch’ on T-shirts to test its effectiveness

Researchers at Katholieke Universiteit Leuven have demonstrated how a small, square, printed patch can be used as a “cloaking device” to hide people from AI object detectors.

In recent years, Convolutional Neural Networks (CNNs) have become highly effective at recognising people in images and video feeds. They learn what a human looks like by examining thousands of images of people.

But they can be fooled – the researchers demonstrate in their paper Fooling automated surveillance cameras: adversarial patches to attack person detection – with a 40cm by 40cm ‘adversarial patch’.

In their paper, researchers Simen Thys, Wiebe Van Ranst and Toon Goedeme focus on the popular YOLOv2 object detector. They use a number of tricks to fool the detector.

“We did this by optimising an image to minimise different probabilities related to the appearance of a person in the output of the detector. In our experiments we compared different approaches and found that minimising object loss created the most effective patches,” they write.
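In practice, that optimisation amounts to running gradient descent on the patch pixels themselves, rather than on the network's weights. The minimal PyTorch sketch below illustrates the idea only; the DummyDetector class, the apply_patch helper and the hyperparameters are hypothetical stand-ins for the paper's actual setup, which targets YOLOv2 and also randomises the patch's position, scale, rotation and lighting during optimisation.

import torch

# Stand-in for a person detector: the paper uses YOLOv2's objectness
# output; a tiny untrained conv net keeps this sketch self-contained.
class DummyDetector(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 1, kernel_size=8, stride=8)

    def forward(self, x):
        # Returns a grid of "objectness" scores in [0, 1].
        return torch.sigmoid(self.conv(x))

def apply_patch(images, patch, top=80, left=80):
    # Paste the patch onto each image at a fixed position. The paper
    # additionally randomises placement, scale, rotation and brightness
    # so the patch still works once printed and filmed.
    out = images.clone()
    ph, pw = patch.shape[1:]
    out[:, :, top:top + ph, left:left + pw] = patch
    return out

detector = DummyDetector().eval()
for p in detector.parameters():
    p.requires_grad_(False)  # only the patch is optimised

images = torch.rand(4, 3, 256, 256)                 # batch of "person" photos
patch = torch.rand(3, 64, 64, requires_grad=True)   # the adversarial patch
optimizer = torch.optim.Adam([patch], lr=0.03)

for _ in range(200):
    patched = apply_patch(images, patch.clamp(0, 1))
    scores = detector(patched)
    # "Object loss": push the highest objectness score per image down,
    # the variant the authors found produced the most effective patches.
    loss = scores.flatten(1).max(dim=1).values.mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

In the real attack, DummyDetector would be replaced by YOLOv2's forward pass, and the loss would include the printability and smoothness terms the authors add so the optimised patch survives the trip from screen to paper to camera.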

They then printed out their optimised patches and tested them by filming people holding them. The researchers found the patches worked “quite well” as long as they were positioned correctly.

“From our results we can see that our system is able to significantly lower the accuracy of a person detector… In most cases our patch is able to successfully hide the person from the detector. Where this is not the case, the patch is not aligned to the center of the person,” the researchers said.

The technique could be used “maliciously to circumvent surveillance systems”, the researchers note, allowing intruders to “sneak around undetected by holding a small cardboard plate in front of their body aimed towards the surveillance camera”.

The group is now planning to apply the patches to clothing.

“We believe that, if we combine this technique with a sophisticated clothing simulation, we can design a T-shirt print that can make a person virtually invisible for automatic surveillance cameras,” the researchers said.

Future work will focus on making the patches more robust and transferable, given they do not work well with different detection architectures, like Faster R-CNN.

The angle and placement of the patch made a difference to the AI's ability to detect a person

There is growing interest in real world adversarial attacks – which are ideally both physically realisable and inconspicuous – on AI detection systems.

In 2016, researchers from Carnegie Mellon University and the University of North Carolina showed how a printed pair of eyeglass frames could be used to beat facial recognition systems.

“When worn by the attacker whose image is supplied to a state-of-the-art face-recognition algorithm, the eyeglasses allow her to evade being recognised or to impersonate another individual,” the researchers said.

Camouflage against face detection has also entered the fashion world.

In 2017, a group from the University of Washington, the University of Michigan, Stony Brook University, and the University of California, Berkeley demonstrated how to fool image classifiers by making alterations to stop signs in the real world.

By using black and white stickers – designed to mimic graffiti and so “hide in the human psyche” – the researchers were able to make a deep neural network-based classifier see a stop sign as a speed limit sign 100 per cent of the time.
